Implementing Peer Technical Reviews in a Large-sized Database Course

John A. Mendonca
jamendonca@tech.purdue.edu
School of Technology, Purdue University
West Lafayette, IN 47907-1421, U.S.A.

Abstract

The peer technical review is a quality assurance activity that has proven valuable in producing better quality software. With careful planning and instruction, a student peer review process can be designed and implemented so that students can learn about, and practice, this process within the classroom. This paper discusses the value of peer reviews in a classroom setting, the challenges to implementation, and how they can be integrated into a large-sized database design course.

Keywords: peer review, collaborative learning, software quality, database curriculum

1. INTRODUCTION

Beginning with the 1997-98 academic year, in preparation for an upcoming re-accreditation review, the School of Technology focused increased attention on quality in curriculum design and delivery and on continuous improvement processes. As a group, the faculty re-examined the overall curriculum design and individual course models, trying to ensure that they were "doing the right thing" in terms of fulfilling their mission. Individual faculty members, responsible for achieving the goals and objectives of the courses they taught, reviewed course plans and pedagogy to ensure they were "doing things right." As a part of this effort, the faculty established a framework for identifying needs and reporting continuous improvement in course delivery.

In addition to specific content requirements, the course model for the Data Analysis and Database Design course includes valuable, less tangible learning objectives--for example, improving interpersonal, presentation, and group work skills, and increasing knowledge of professional "best practices" techniques. In addition to individual practical laboratory assignments, the course already included a team project that attempted to meet these objectives. Beginning with the Spring 1999 semester, a technical peer review component was added to several of the existing laboratory assignments, specifically to improve learning among the students and to meet the social, professional, and content objectives described above. It was expected that peer technical reviews would have both process (learning) and product (improved quality) benefits for the students in the course. The intended objectives included the following:

* Improved learning through paired discussion of concepts and sharing of information on how concepts are successfully applied.
* Improved quality of products submitted for grading.
* Improved understanding of the information systems environment, particularly of peer technical reviews as a quality control technique.

This paper discusses the value of peer reviews in a classroom setting, the challenges to effective implementation, and how they have been integrated into a large-sized database design course.
2. PEER REVIEW NOT EVALUATION

Cooperative learning (sometimes called "collaborative learning") has gained popularity in the past decade. An instructional approach based in active learning theory, it emphasizes student participation, communication, and the development of interpersonal and analytical skills through team activities. In its basic form, it can be defined as an instructional technique that requires students to work together in small groups on structured learning tasks (Cooper 1995). Students help each other learn by contributing to a common task. Project 2061, sponsored by The National Council on Science and Technology Education, advocates special attention to thinking skills such as analyzing information, communicating ideas, and making logical arguments (Cohen 1994). When applied with training and planning, the peer technical review can be very effective in developing such skills.

An important caveat must be observed when considering collaborative education theory. A literature search by the author indicates that, within education theory, the terms "peer review" and "peer evaluation" are often used interchangeably in writings about peer group activities. Regardless of which term is used, writers regularly focus on peer assessment for grading, rating, or ranking student performance. Within the information systems arena, however, the peer technical review is something quite different. Internationally recognized software engineers, such as Roger Pressman, have insisted that technical reviews are about the product, not about the producer (Pressman 1997). A review is an effective quality assurance tool for uncovering defects in a product. It is in this professional context, with its emphasis on product improvement, that peer technical reviews were implemented in the course.

A database design course is a particularly good match for collaborative learning techniques. Designing databases is a conceptual, logical undertaking in which many dependent and independent elements are integrated to form a data model that reflects the information needs of an enterprise. Group work is particularly effective for learning when the task requires conceptual thinking rather than memorization or application of a set rule (Cohen 1994). While database design does have set rules, there is not always one right answer--some designs will be better than others. The designer must be able to explain and justify the logical decisions that lead to logical, and then physical, designs.
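To make the point concrete, consider a minimal sketch of two defensible designs for the same business rule. The scenario and all names are hypothetical, not drawn from the course assignments, and the DDL is issued through Python's sqlite3 module simply to keep the example self-contained.

```python
# A minimal sketch (scenario and names hypothetical, not from the
# course assignments) of two defensible designs for one business
# rule: an employee is either salaried or hourly, and each kind
# carries different pay attributes.
import sqlite3

conn = sqlite3.connect(":memory:")

# Design A: a single table with a type discriminator. Queries are
# simple, but the pay columns must be nullable and a CHECK constraint
# is needed to keep the discriminator honest.
conn.execute("""
    CREATE TABLE employee_a (
        emp_id        INTEGER PRIMARY KEY,
        name          TEXT NOT NULL,
        employee_type TEXT NOT NULL CHECK (employee_type IN ('S', 'H')),
        annual_salary REAL,   -- populated only when employee_type = 'S'
        hourly_rate   REAL    -- populated only when employee_type = 'H'
    )""")

# Design B: supertype/subtype tables. No nullable pay columns, and the
# rule becomes structural rather than declarative, but every retrieval
# of pay data now requires a join.
conn.execute("""
    CREATE TABLE employee_b (
        emp_id INTEGER PRIMARY KEY,
        name   TEXT NOT NULL
    )""")
conn.execute("""
    CREATE TABLE salaried_b (
        emp_id        INTEGER PRIMARY KEY REFERENCES employee_b,
        annual_salary REAL NOT NULL
    )""")
conn.execute("""
    CREATE TABLE hourly_b (
        emp_id      INTEGER PRIMARY KEY REFERENCES employee_b,
        hourly_rate REAL NOT NULL
    )""")
conn.close()
```

Both schemas enforce the same rule; choosing between them is exactly the kind of trade-off an author should be prepared to explain and justify during a review.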
Peer review is valuable in any software engineering course, regardless of class size. It is especially useful in a large-sized class in which students are developing several complex enterprise designs from multiple scenarios, and the instructor cannot offer each student a detailed review and discussion of the many elements that contribute to even one good design. The one-on-one peer technical review, as implemented for this course, supports conceptual learning and provides analytical training and useful feedback while enabling students to improve their product before the instructor evaluates it.

To meet the objectives noted earlier, students must feel comfortable that the process is about learning, feedback, and quality improvement--not about judgment--for all participants. Just as in the work environment, the classroom technical review can serve as a buffer against the judgment day when the product is finally implemented (here, submitted to the instructor) and awaits thorough scrutiny and evaluation (those "production" runs will tell all!). But, also as in the work environment, succeeding with peer technical reviews requires that students learn the process, understand its value, and master the protocol and mechanics of conducting it effectively.

3. PRACTICAL VALUE OF PEER TECHNICAL REVIEWS

In addition to their pedagogical value, the reviews offer two major practical benefits: first, practice with a proven quality-control technique; and second, improved product quality.

Formal and informal peer reviews of software (implemented with varying protocols and known by various names, such as "walkthroughs," "inspections," or "formal technical reviews") have long been an accepted quality assurance tool for software design and development. Industry studies (TRW, Nippon Electric, and others) report that design activities introduce between 50 and 65 percent of all errors during software development, and that formal review techniques may be up to 75 percent effective in uncovering design flaws (Pressman 1997). Defect amplification modeling has confirmed that design errors that remain undiscovered have a compounding and costly impact on the product. The widely accepted Capability Maturity Model of the Software Engineering Institute--a model for developing effectiveness in software engineering processes--includes peer reviews as a Level 3 key process area, and much of the data needed to support Level 4 maturity comes from peer reviews (Paulk 2000).
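The compounding effect is easy to see with a small calculation. The sketch below uses invented numbers--only the 75 percent review effectiveness matches the rate cited above--but it illustrates how undetected errors pass to the next phase and multiply.

```python
# Illustrative defect amplification arithmetic (numbers invented for
# the sketch; only the 75% review effectiveness matches the rate
# cited above). Undetected errors pass to the next phase, where each
# one can spawn additional defects.
def errors_escaping(new_errors, inherited, amplification, review_effectiveness):
    """Errors that survive a phase and pass to the next one."""
    total = new_errors + inherited * amplification
    return total * (1 - review_effectiveness)

# Design phase: 60 new errors. A 75%-effective review lets 15 escape;
# no review lets all 60 escape.
reviewed = errors_escaping(60, 0, 1.0, 0.75)        # 15.0
unreviewed = errors_escaping(60, 0, 1.0, 0.0)       # 60.0

# Coding phase: 40 new errors, and suppose each inherited design
# error spawns 1.5 defects in code.
print(errors_escaping(40, reviewed, 1.5, 0.5))      # 31.25 escape
print(errors_escaping(40, unreviewed, 1.5, 0.0))    # 130.0 escape
```

Under these assumptions, skipping the reviews roughly quadruples the defects that survive into later, more expensive phases.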
Because of the way assignments are handled, students rarely understand the complexities and importance of quality control, and rarely have the opportunity to practice commonly used "best practices" techniques for error discovery. Generally, students are discouraged from, or warned against, having other students review their products before handing them in for grading--that's "cheating!" The information systems professional who does not take the opportunity to have others examine his work before production implementation may put himself and his company at risk--possibly very serious risk. Of course, not all tasks need formal technical reviews. But because the technical review is a valuable professional tool that is so rarely part of common educational practice, it is worth considering for inclusion in information systems courses wherever practical and appropriate.

The second practical benefit to students--one very close to their hearts--is the opportunity to improve the quality of their products before turning them in for grading. A non-judgmental peer technical review offers students a chance to reconsider and improve or correct design decisions, interpretations of specifications, omissions of functionality, misapplications of concepts, or failures to comply with standards. For this course, peer reviews of assignment materials were scheduled one day before the actual due date. The review inspected the entire packet of deliverables, and no penalties were assessed for corrections made, regardless of type, quantity, or severity.

4. PREPARATION AND IMPLEMENTATION: OVERCOMING THE CHALLENGES

Four major challenges face peer technical reviews in a large class setting: 1) training students to conduct effective reviews; 2) designing the environment; 3) controlling the process to achieve the intended benefits; and 4) the educational culture itself.

The first challenge is to train students to conduct reviews effectively, fairly, and sensitively. The specific protocol for conducting a peer review in a work environment is a product of circumstances, history, experience, and culture. Even if there are written rules of conduct--which is unlikely--the primary means of education for new employees is participation and observation. This cannot be the case for students beginning to practice quality assurance through peer reviews. Because peer reviews are not part of the normal student experience, and the peers participating in the reviews are themselves unlikely to be experienced in the process, formal instruction and guidelines for conduct are a critical part of implementation. An introductory exercise to build student confidence in teamwork may be the most beneficial first step (Siciliano 1999). Instruction should continue by explaining the importance and value of the peer review within the professional information systems environment. If the concept has not already been discussed, this is a good opportunity to introduce it within the context of software engineering and best practices in quality assurance. Two things must be impressed upon the students: first, that they are not conducting an evaluation of a peer, but participating in a process for quality improvement; and second, that an effective process benefits the author because it leads to a better product (in other words, a better grade!). The current guidelines used for preparation and instruction are shown in Appendix A.

The second challenge is designing an effective review process within a large classroom environment. For this course, the primary design construct was dividing the class into teams of four and then conducting the reviews in sub-teams of two at a time. This structure, which requires planning at the beginning of the course, has been implemented in classes of up to 84 students. The team of four, identified early in the course, participates together in the review process and later in a full four-member group project. Because the reviews are conducted during the first half of the course and the team project is assigned in the second half, the reviews serve the additional purpose of having students begin to know (and, one hopes, trust) their teammates. For the three peer reviews, each team member pairs with a different teammate each time. In each pair, the author presents his product to the reviewer and then serves as reviewer for the other person's product. The rotation is sketched below.
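A minimal sketch of that rotation, with hypothetical student names: over three rounds, each member of a four-person team is paired with each teammate exactly once, and both members of a pair take a turn as author and as reviewer.

```python
# A sketch of the pairing rotation (student names hypothetical):
# three review rounds pair each member of a four-person team with
# each teammate exactly once.
def review_rounds(team):
    """Return one pair-partition of a four-person team per round."""
    a, b, c, d = team
    return [[(a, b), (c, d)],   # review 1
            [(a, c), (b, d)],   # review 2
            [(a, d), (b, c)]]   # review 3

for rnd, pairs in enumerate(review_rounds(["Ann", "Ben", "Cai", "Dee"]), start=1):
    for left, right in pairs:
        # Within a pair, each student presents once and reviews once.
        print(f"Review {rnd}: {left} <-> {right}")
```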
For database design assignments, students are presented with business case scenarios that require logical and physical database solutions designed to support the enterprise and reflect its business rules. Students model their designs with a CASE tool (ERwin database software from Computer Associates). Although the models are a critical part of the product package to be reviewed, they are only one part of a set of deliverables that also includes, for example, a cover memo, design assumptions, and required reports generated by the CASE tool.

For this course, conducting reviews in two-person teams best met the objectives within the existing constraints--specifically time, controls, and grading resources. Each review was limited to twenty-five minutes. In addition to the regularly scheduled three hours per week of "lecture" class, the course has an application-oriented, fifty-minute-per-week Practice/Study/Observation (PSO) session. Each team of two conducts its peer reviews (one for each participant) during the scheduled PSO period. Students are divided into PSO groups of up to 30 students, and a student assistant monitors a group of about ten to fifteen teams, offering consultation and advice, if needed, after both sets of reviews are completed. Two versions of a database design problem/scenario are assigned: one person on the team receives Version A and the other Version B. Both assignments have the same deliverables, illustrate the same concepts, and have the same level of complexity, but the different business scenarios require different design solutions.

The third challenge to effective implementation is controlling the process--making it fair and effective for all students, and holding students individually accountable for this collaborative effort. Although this challenge applies to any collaborative work, it may be perceived as the major impediment to wider use of the peer technical review in the classroom. The most common problem is a student who does not present a complete product at the time of the review. It is difficult to impress upon students the concept of a product that is "sort of due" one day before its "final" (counted for grading) due date. But, consistent with the principles of the professional technical review this process emulates, there should be no penalty for errors or defects uncovered during the review and corrected before submission. For the first review session, students may not have a complete product simply because they do not yet understand the requirements. And some students may intentionally use the review session as a crutch to support inadequate effort and sloppy analysis; some will always take this approach. Nor is it always easy to distinguish between a product that shows sufficient effort but has errors and one that shows poorly applied or insufficient effort. There is some indication that students may increase their efforts when they know other students will see their work, although there has been no attempt to confirm this; other programs that instituted peer reviews have informally noted the same tendency (Sullivan 1998).

To reward those who do present a complete package at the time of the review, a portion of the assignment grade (25%) is assigned at the time of the review. During the PSO class, the monitor looks through the product, compares it against the list of deliverables, and determines whether it was adequately prepared for presentation--regardless of design or other defects. This has been somewhat effective (see the section on evaluation below) in motivating students to present adequately prepared work for the review; a sketch of the check appears below.
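One way to picture the readiness check, assuming an all-or-nothing award of the 25% portion (the deliverable names are hypothetical stand-ins for the actual packet contents):

```python
# One possible form of the monitor's readiness check, assuming an
# all-or-nothing award of the 25% portion; the deliverable names
# are hypothetical stand-ins for the actual packet contents.
REQUIRED_DELIVERABLES = {"cover memo", "design assumptions",
                         "logical model", "physical model", "reports"}

def readiness_points(presented, assignment_points=100):
    """Award the review-day portion only for a complete packet."""
    missing = REQUIRED_DELIVERABLES - set(presented)
    points = 0.25 * assignment_points if not missing else 0.0
    return points, sorted(missing)

points, missing = readiness_points(
    ["cover memo", "logical model", "physical model", "reports"])
print(points, missing)   # 0.0 ['design assumptions']
```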
A fourth major challenge to implementation is the educational culture itself. As noted earlier, students are simply not used to sharing their products with other students (except perhaps informally with friends). To some extent, working in teams has fostered a more collaborative environment. But because students most commonly work on different parts of a project, team projects do not require the kind of direct evaluative interaction demanded by a technical review, whose mission is uncovering defects. The important issues of privacy and concerns about personal competency are not unlike those that must be faced in a work environment, so students should not be sheltered from them. These possibilities highlight the importance of adequate, regularly reinforced education in conducting the reviews. But regardless of the preparation, appreciation for the value of the review may not be realized until a significant win takes place--when a student averts a major problem because of the review process.

5. EVALUATING THE PROCESS

Implementing peer technical reviews in the course appears to have met the three objectives listed in the Introduction. The first two objectives, improved learning and better quality products, are supported by student comments and by surveys conducted at the end of the course. The most recent survey yielded two significant results. First, 87% of the students either "strongly agreed" or "agreed somewhat" that they had made at least one important change to their product before submitting it, where an important change was defined as one that would have cost them points in grading. Second, 78% of the students felt that the review process "contributed to my learning and understanding the course material." An informal assessment, comparing skills demonstrated on final tests before and after peer review implementation, also supports the learning value of the process.

The third objective, improved understanding of the information systems environment and the role of the technical review in quality improvement, has not been systematically assessed. But it is experientially validated: by practicing a quality improvement process, students, by their own evaluation, effectively improved the quality of their products. No amount of lecturing about quality processes could replace this direct experience of the process and its value.

6. CONCLUSION

Implementing a peer technical review process in an information systems course can have demonstrated, verifiable value. When well managed, it can be especially effective in a course, such as a database design course, that requires learning and applying complex concepts. As a collaborative learning mechanism, it helps students communicate, correct, clarify, and accurately apply concepts in a large-sized course where individual attention from the instructor is limited. Students learn the value of a proven quality-improvement technique used in the work environment and have an opportunity to improve their product before the instructor evaluates it--a win-win situation for students, the instructor, and future employers!

7. REFERENCES

Cohen, Elizabeth G., 1994, Designing Groupwork: Strategies for the Heterogeneous Classroom, 2nd ed. Teachers College Press, New York.

Cooper, James L., et al., 1990, Cooperative Learning and College Instruction: Effective Use of Student Learning Teams. Institute of Teaching and Learning, Long Beach, CA.

Paulk, M., 2000, "Thinking About Peer Reviews." Software Quality, No. 3, Spring 2000, pp. 9-10.

Pressman, Roger S., 1997, Software Engineering: A Practitioner's Approach. McGraw-Hill, New York.

Siciliano, Julie, 1999, "A Template for Managing Teamwork in Courses Across the Curriculum." Journal of Education for Business, May/Jun 1999, pp. 261-264.

Sullivan, Dave, Carole E. Brown, and Norma L. Nielson, 1998, "Computer-mediated Peer Review of Student Papers." Journal of Education for Business, Nov/Dec 1998, pp. 117-119.

APPENDIX A: PEER TECHNICAL REVIEW GUIDELINES

General

* Review the product, not the producer.
* The purpose of the review is to uncover possible problem areas and errors; it is not to redesign the product or correct errors during the review session.
* Civility rules. You will perform both roles (reviewer and author of the reviewed product), so practice the process and be aware, fair, and sensitive.

To the AUTHOR
* Lead the review. If there is a list of deliverables, present each item on the list, giving the reviewer ample time to examine the product.
* Give the reviewer time to read and review the design specifications. Present an overview of your design and explain how it solves the problem. The reviewer's function is to ask questions, so be sure to allow time for that.
* You are responsible for the product, no matter what is brought up during the review. Evaluate the information collected during the review. Beware of the assertive reviewer who could lead you down the wrong path!

To the REVIEWER

* Ask questions. Don't challenge.
* Examine each item for the following: commission errors, omission errors, misinterpretations, thoroughness (for example, that all deliverables are included in the product package), and adherence to class standards.
* You cannot insist that the author change any part of his/her product.