James C. Johnson Student Paper Competition
2022 Winner: Chen Tang
Named for James C. Johnson, a founding member of the IPMAAC board, IPAC's annual James C. Johnson Student Paper Competition recognizes the contributions of students to the field of personnel assessment.
James C. Johnson was a native of Whittier, CA and studied at both the University of Michigan and the University of Minnesota. He taught in the Psychology Department at the University of Tennessee in the early 1970s, leaving in 1972 for what he planned to be a temporary role in state government. "Temporary" turned out to be over 30 years, as Jim retired as Director of the Research Division of the Tennessee Department of Personnel (DOP). Jim was the very model of the scientist/practitioner. His accomplishments within the Tennessee DOP are far too numerous to mention; he shaped the system there into a true merit system, turning sound science into innovative and workable practical systems. Jim served as president of IPMAAC for 18 months in 1994-1995. The high regard in which Jim was held by his peers is demonstrated by his receiving the Clyde Lindley Exemplary Service Award in 1997 and the Stephen E. Bemis Memorial Award in 1999.
Jim Johnson was absolutely committed to designing and using job-related, fair, and innovative assessment methods. He actively shared his work informally and as an author and trainer, regularly presenting at the annual IPMAAC (now IPAC) conference and many others. In March of 2007, the Student Paper Competition was named in memory of Jim. His support of students and junior professionals was well known in the organization. Jim would have been very proud to see the caliber of work and research that emerges from this competition.
Students who conduct research and analyze trends report conclusions that contribute to the field of assessment, which directly aligns with the purpose of IPAC: to promote sound, merit-based personnel assessment practices and to encourage and assist efforts to improve assessment practices in such fields as personnel selection, performance evaluation, training, job analysis, and organizational effectiveness.
We believe that student contributions to the field of personnel assessment are valuable in furthering these efforts and should be recognized.
The winner of each competition is invited to present his or her paper at the Annual IPAC Conference. The winner receives up to $600.00 in conference-related travel expenses for in-person conferences, free conference registration, and a free one-year membership in IPAC. In addition, the winner will be recognized in the IPAC conference program and the IPAC newsletter. Finally, the university department in which the student completed his or her research will be given a plaque commemorating the student's IPAC award achievement.
See 2022 rules and process for submission here.
Past International Personnel Assessment Council (IPAC) James C. Johnson Student Paper Competition Winners
2022: Chen Tang, University of Illinois, "Shrinkage of Diversity Tradeoff Curves in Personnel Selection: A Comparison of Local Validity Studies, Meta-Analysis, Bayes-Analysis, and Ensemble Machine Learning"
2021: Tianjun Sun, University of Illinois, "Forced choice to solve the cross-cultural response style bias problem"
2020: Bo Zhang, University of Illinois, "Though Forced, Still Valid: Psychometric Equivalence of Forced-Choice and Single-Statement Measures"
2019: Annie Kato, Baruch College, City University of New York, "Cognitive Ability Tilt and Job Performance: A Case for Specialization"
2018: Jacob Bradburn, Michigan State University, “Personality Validity in Predicting Job Performance: How Much Does Context Matter?”
2017: Q. Chelsea Song, University of Illinois, “Diversity Shrinkage: Cross-Validating Pareto-Optimal Weights to Enhance Diversity via Hiring Practices.”
2016: David Glerum, University of Central Florida, “The Trainer Matters: Cross-Classified Models of Trainee Reactions.”
2015: Mengyang Cao, University of Illinois, “Examining the Process Underlying Responses to Personality Measures in High-Stakes Situations: Does the Item Response Process Matter?”
2014: Christopher Kenny Adair, DePaul University, "Interventions for Addressing Faking on Personality Tests for Employee Selection: A Meta-Analysis."
2013: Rachael Klein, University of Minnesota, “Cognitive Predictors and Age-based Adverse Impact Among Business Executives.”
2012: Garett N. Howardson, George Washington University, “Coming Full Circle with Coachability and Fakability of Personality Reactions: Toward an Understanding of Affective Training Reactions through the Core Affect Circumplex.”
2011: Christopher D. Nye, University of Illinois, “Vocational Interests and Performance: A Quantitative Summary of 60 Years of Research.”
2010: In-Sue Oh, University of Iowa, “The Five-Factor Model of Personality and Job Performance in East Asia: A Cross-Cultural Validity Generalization Study.”
2009: Stephan Dilchert, University of Minnesota, “Assessment Center Dimensions: Individual Differences Correlates and Meta-Analytic Incremental Validity.”
2008: Greet Van Hoye, Ghent University, Belgium, “Tapping the Grapevine: Investigating Determinants and Outcomes of Word-of-Mouth as a Recruitment Source.”
2007: Jeffrey M. Cucina, George Washington University, “A Comparison of Alternative Methods of Scoring a Broad-Bandwidth Personality Inventory to Predict Freshman GPA.”
2006: Rustin D. Meyer, Purdue University, “Situational Moderators of the Conscientiousness-Performance Relationship: An Interactional Meta-Analysis.”
2005: Jalane M. Meloun, University of Akron, “Computer Anxiety: A Possible Threat to the Predictive Validity of Computerized Tests.”
2004: Kevin M. Bradley, Virginia Tech, “Are Personality Scale Correlations Inflated in Job Applicant Samples?”
2003: David L. Van Rooy, Florida International University, “Emotional Intelligence: A Meta-Analytic Investigation of Predictive Validity and Nomological Net with GMA and the Big Five Factors of Personality.”
2002: Mark N. Bing, University of Tennessee, “Incremental Validity of the Frame-of-Reference Effect in Personality Scale Scores: A Replication and Extension.”
2001: Mitchell Gold, Illinois Institute of Technology, "SME Judgments in the Angoff Procedure: The Impact of Content Relevance and Item Format."
2000: Filip Lievens, University of Ghent, "Assessor Training Strategies and Their Effects on Inter-rater Reliability, Discriminant Validity, and Accuracy."
1999: Michelle A. Dean, Louisiana State University, “A Response Option Examination of Biodata Adverse Impact and Criterion.”
1998: Amie D. Gee, University of Akron, “Harnessing the Predictive Power of Conscientiousness: A Validity Study of Biographical Data Measure.”
1997: Gary J. Greguras & Chet Robie, Bowling Green State University, “Comparing Measurement Error of 360-degree Feedback Ratings across Dimensions and Rating Sources.”
1996: Jaliza Cader, The University of Tennessee at Chattanooga, “Reactions of Simulated Job Applicants to a Personality Inventory.”
1995: Jennifer Verive, University of Akron, “Short-Term Memory Tests in Personnel Selection: Adverse Impact and High Validity.”
1994: Jennifer Burnett, University of Florida, “Utilization and Validity of Non-Verbal Cues in the Structured Interview.”
1993: Deniz S. Ones and Chockalingam Viswesvaran, University of Iowa, “Meta-Analysis of Integrity Test Validities: Findings and Implications for Personnel Selection and Theories of Job Performance.”
1992: Louis Forbringer, University of Akron, “The Role of Availability Heuristic in Biasing Task Description Ratings of the Position Analysis Questionnaire.”
1991: David A. Dye, George Washington University, “Construct Validity of a Biographical Data Inventory: A Confirmatory Factor Analysis.”
1990: Phyllis A. Kuehn, Georgia College of Education, “Evaluation of the Response Validity and Reliability of a Teacher Licensure Test Job Analysis.”
1989: Thomas W. Mason, University of Tennessee, “Replacing the Employment Interview with Bio Data: A Written Structured Interview with Modeled Decisions.”
1988: Juan I. Sanchez, University of South Florida, “Determining Important Tasks Within Jobs: A Policy Capturing Approach.”
1987: NO AWARD GIVEN
1986: Michael A. McDaniel, George Washington University, "The Evaluation of a Causal Model of Job Performance: The Interrelationships of General Mental Ability, Job Experience, and Job Performance."
1985: Anne Marie Carlisi, University of Akron, “The Influence of Sex Stereotyping and the Sex of the Job Evaluator on Job Evaluation Ratings.”
1984: NO AWARD GIVEN
1983: Dennis Doverspike, University of Nebraska-Omaha, "A Statistical Analysis of Internal Sex Bias in a Job Evaluation Instrument."
1982: Kenneth Pearlman, George Washington University, "The Bayesian Approach to Validity Generalization: A Systematic Examination of the Robustness of Procedures and Conclusions."