(1) Prof Sacha Epskamp
University of Amsterdam, Netherlands
Network Psychometrics: Current State and Future Directions
The novel field of network psychometrics focuses on the estimation of network models that aim to capture interactions between observed variables. In this presentation, I will introduce this field and its main recent advances, and I will discuss future directions and challenges the field has yet to face. First, I will discuss the estimation of networks from datasets ranging from data with independent cases (e.g., cross-sectional data) to datasets of multiple time series. Second, I will discuss the formalization of network models as formal psychometric models, which allows for their combination with the general frameworks of structural equation modeling and item response theory. I will discuss model equivalences between network and factor models and generalizations of network models that encompass latent variable structures. Finally, I will discuss future directions in network psychometrics, such as the handling of missing data and ordinal data, network-based adaptive assessment, and forming network models using theoretical knowledge.
Sacha Epskamp is an assistant professor at the University of Amsterdam, Department of Psychological Methods, and a research fellow at the Institute for Advanced Study of the University of Amsterdam. In 2017, he completed his PhD on network psychometrics – estimating network models from psychological datasets and equating these to established psychometric modeling techniques. He has implemented these methods in several software packages now routinely used in diverse fields of psychological research. Sacha Epskamp teaches multivariate statistics and data science, and his research interests involve reproducibility, complexity, time-series modeling, and dynamical systems modeling. In addition to the Psychometric Society dissertation prize, he has received several awards for his research, including the Leamer-Rosenthal Prize for Open Science (2016).
(2) Prof Jonathan Templin
University of Iowa, USA
Building a Diagnostic Model-Based Formative Assessment System for Personalizing Educational Progress
Personalized learning, despite being a perhaps overused term, is a topic of great interest in a number of educational fields, from the learning sciences to assessment. Often, the term is used to describe how some product allows for near-instant results that help determine what students should be studying. Often lost in the pursuit of personalized learning systems is the role of the teacher. In this presentation, I describe efforts to build a formative assessment system that empowers teachers by providing up-to-the-minute information about what their students know. The system implements versions of recently developed diagnostic psychometric models that, when paired with small, regularly administered progress assessments, can provide an accurate and up-to-date snapshot of students’ knowledge states. Further, this system could be enhanced by a prediction system enabling multiple measures to contribute to student estimates. The system seeks to provide richer, more detailed student feedback to teachers in order to help them decide what would be best for each student’s educational progress.
Jonathan Templin is Professor and E. F. Lindquist Chair in the Department of Psychological and Quantitative Foundations at the University of Iowa. Dr. Templin received his Ph.D. in Quantitative Psychology at the University of Illinois at Urbana-Champaign in 2004, where he also received an M.S. in Statistics in 2002. He joined the faculty of the University of Iowa in January 2019, after stints on the faculty at the University of Kansas, the University of Nebraska-Lincoln, and the University of Georgia. The main focus of Dr. Templin’s research is in the field of diagnostic classification models—psychometric models that seek to provide multiple actionable and reliable scores from educational and psychological assessments. He also studies Bayesian statistics, as applied in psychometrics, broadly. Dr. Templin’s research program has been funded by the United States National Science Foundation and Institute of Education Sciences and has been published in journals such as Psychometrika, Psychological Methods, Applied Psychological Measurement, and the Journal of Educational Measurement. In 2014, he was elected as a member of the Society of Multivariate Behavioral Research. Dr. Templin is currently an outgoing co-editor of the Journal of Educational Measurement and an outgoing Associate Editor for Psychometrika. He is co-author of the 2010 book Diagnostic Measurement: Theory, Methods, and Applications, which won the 2012 American Educational Research Association Division D Award for Significant Contribution to Educational Measurement and Research Methodology. He is the winner of the 2015 AERA Cognition and Assessment SIG Award for Outstanding Contribution to Research in Cognition and Assessment and the inaugural 2017 Robert Linn Lecture Award.
(3) Prof Anita Hubley
University of British Columbia, Canada
Contributions of Response Processes to Test Validation and Development
Responding to test items and tasks is a complex human behaviour; response processes involve an interaction among the test taker, test items or tasks, responses or response options, and the testing context (Hubley, 2017; Hubley, 2021; Launeanu & Hubley, 2017). As one of the five sources of validity evidence in the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999, 2014), response processes evidence tends to be poorly understood by researchers and under-utilized relative to other sources such as internal structure and relations with other variables (Zumbo & Hubley, 2017). If scores are to be meaningful and useful, it is critically important in test development and validation work to determine the degree to which the test developer, test user, and test takers interpret the meaning of items or tasks in the same way, use relevant information or appropriate strategies in arriving at their responses, and respond in a meaningful way (Hubley, 2021; Launeanu & Hubley, 2017). In recent years, there has been an influx of research incorporating response processes evidence. In this talk, I will define response processes, describe some different forms of response processes evidence, discuss and evaluate their use in the validation of social science measures based on a recent scoping review as well as my recent and current work examining response processes related to positively and negatively worded and keyed items, and provide suggestions for how response processes information can be used in test development, revision, and validation.
Dr. Anita Hubley is a Full Professor and Killam Laureate in the Department of Educational and Counselling Psychology and Special Education at the University of British Columbia (UBC), where she is Coordinator and member of the Measurement, Evaluation, and Research Methodology program, member of the Counselling Psychology program, and Director of the Adult Development and Psychometrics Lab. She received her Ph.D. in Psychology in 1995 with a specialization in Human Assessment. Dr. Hubley is recognized internationally for her expertise in test development, validity, and psychological and health assessment and measurement across the adult lifespan, including with vulnerable populations. She has published over 100 academic articles and book chapters on various topics, including reliability, validity, and the development and validation of measurement instruments. She has also developed several clinical, health, and psychological tests. She has been a principal or co-investigator on numerous grants involving the development or psychometric evaluation of tests, given 110+ presentations or invited addresses at conferences, and has given two workshops for the International Test Commission on evaluating reliability and validity studies. She is an Associate Editor for the new Springer journal Measurement Instruments for the Social Sciences, a section editor and editorial board member for the Encyclopedia of Quality of Life Research, and on the editorial boards of Journal of Psychoeducational Assessment, Social Indicators Research, and the Canadian Journal of School Psychology. She is a former member of the Executive Council of the International Test Commission (ITC) and former Editor of the ITC’s publication Testing International.
(4) Dr John Fremer and Dr David Foster
Caveon Test Security, USA
Challenges Confronted and Lessons Learned: Protecting Test Content and Personal Information from Test Security Threats in International Testing Programs
As we move through the first quarter of the 21st century, challenges to the security of our testing programs have been intensifying. Growing numbers of thieves are stealing and selling test questions, miniaturization technology is supporting undetectable recordings of testing sessions, and cheating technology is being sold on the internet. The ongoing validity of our test scores is being seriously threatened. In response, the last decade has seen significant advances in data forensic science, web monitoring models, secure item designs, and test delivery safeguards. While these measures have improved test security in many assessment programs, the threat remains clear, present, and dangerous. The presenters will highlight test security responses to the growing security challenges in the areas of protection, deterrence, detection, and follow-up actions. Specific and actionable ideas will be provided for test program planners and managers to enhance security in every aspect of a high-stakes assessment program. Major lessons learned from dealing with security challenges in international testing programs will be shared, offering ideas that have stood the test of time. Many testing programs are looking for new solutions. Often these solutions move beyond the century-old reliance on static multiple-choice items, traditional proctoring methods, and conventional test administration models. The presentation will look at research underway and new technologies being developed to help programs transition from traditional approaches to more secure testing environments for their programs and their examinees. Future directions for test security will be considered, including the promise of cheat-resistant item design and delivery. Test security threats will continue to evolve. We will all have to keep learning and evolving if we are to protect our testing programs and the services they provide.
John Fremer is a Founder of Caveon Test Security, a company that helps improve security in test development, test administration, reporting, and score use. He serves as President of Caveon Consulting Services. John has 45+ years of testing experience, including management positions at ETS and Pearson. John is a Past President of the Association of Test Publishers (ATP) as well as the National Council on Measurement in Education (NCME) and the Association for Assessment in Counseling (AAC). John received the 2007 ATP Award for Contributions to Measurement. He served as editor of the NCME journal Educational Measurement: Issues and Practice. He is co-Editor, with Jim Wollack, of the Handbook of Test Security (2011). John presents frequently at national and international testing conferences. John has a B.A. from Brooklyn College, CUNY, where he graduated Phi Beta Kappa and Magna Cum Laude, and a Ph.D. from Teachers College, Columbia University.
From 1990 to 1997, David directed certification test development at Novell, where he introduced computerized adaptive testing (CAT) on a worldwide scale, pioneered new item types, and launched simulation-based testing. He co-founded Galton Technologies to provide test development technology and services to certification programs. In 2003, David co-founded the industry’s first test security company, Caveon. Under his guidance, Caveon has created new security tools, analyses, and services to protect exams. David has focused on changing the design of items and tests to reduce the harmful effects of cheating and testwiseness. He has served on numerous boards and committees, and he founded the Performance Testing Council. He has authored numerous articles for industry publications and journals and has presented extensively at conferences. David graduated from Brigham Young University in 1977 with a Ph.D. in Experimental Psychology. He completed a post-doctoral fellowship at Florida State University in 1981.
(5) Prof Dragoș Iliescu
University of Bucharest, Romania
Current Research and Practice on Fairness and Discrimination in Personnel Assessment
Dragoș Iliescu is a Professor of Psychology at the University of Bucharest. He has been active as a consultant for more than 20 years, and has been involved in or led important projects related to tests, testing, and assessment (among them more than 100 test adaptation projects), mainly in South-Eastern Europe, but also in South-East Asia, Africa, the Middle East, and South America. Dragoș Iliescu has served in various capacities for a number of national and international professional associations; among others, he is the immediate Past-President (2016-2018) of the International Test Commission (ITC), an Executive Committee member and Treasurer of the European Association for Work and Organizational Psychology (EAWOP), and the President-Elect of Division 2 (Psychological Assessment and Evaluation) of the International Association of Applied Psychology (IAAP). He is the Editor of the European Journal of Psychological Assessment and the author of over 100 scientific papers, book chapters, and books, among them (as co-Editor) the acclaimed ITC International Handbook of Testing and Assessment, published in 2016 by Oxford University Press, and an important monograph on test adaptation (Adapting Tests in Linguistic and Cultural Situations), published by Cambridge University Press. His research interests center on two domains: (1) psychometrics: psychological and educational assessment, tests, and testing (with an important cross-cultural component), and (2) work, industrial, and organizational psychology (with an important focus on measurement in selection and occupational health).
(6) Prof Lianzhen He
Zhejiang University, China
China’s Standards of English Language Ability: Impetus for Change in Language Learning, Teaching and Assessment
To support China’s development in the new era and to cultivate high-calibre personnel, important changes have taken place across the education system in China over the past few years. In the field of foreign language education, great attention is given to the introduction of formative assessment and to a shift away from teaching to the test and toward the development of core communicative competencies. In 2014, the State Council of China issued the Implementation Opinions on Deepening the Reform of Examinations and Enrolment, calling for the development of a new assessment system for foreign languages. An important part of the proposed system is the development of China’s Standards of English Language Ability (CSE). The CSE, specifically related to the Chinese EFL context, is expected to provide a set of transparent and consistent standards of English language proficiency to enhance the communication between English teaching, learning, and assessment. Studies in relation to the CSE are being conducted, linking newly designed local tests, as well as well-established international English language tests such as IELTS and TOEFL, to the framework. In this talk, a brief introduction to the background, the theoretical framework of the CSE, and the development process will be given, with a special focus on the CSE’s relation to established international English language tests. The potential impact of the CSE on the Chinese education system as a whole might serve as an instructive example for other national language learning and assessment programs.
Prof. He’s main research interests are language testing and English language teaching. She received her Master’s degree from the University of Birmingham (1993) and her PhD in linguistics and applied linguistics from Guangdong Foreign Studies University, China (1998). She was a senior visiting scholar at the University of California, Los Angeles in 2004 and was local chair of the 2008 Language Testing Research Colloquium (LTRC) held in Hangzhou. She was the Benjamin Meaker Visiting Professor at the University of Bristol in 2014. She has also been a keynote speaker at several international conferences. She has directed more than 10 major research projects on language testing and language teaching. She is also Chair of the Advisory Board of Foreign Language Teaching and Learning in Higher Education Institutions in China and a National Professor of Distinction. She has been on the editorial boards of a number of journals, including Language Assessment Quarterly, and has been a member of the TOEFL COE since 2015. She has published widely in applied linguistics and language testing, including over 30 English textbooks used nationwide in Chinese universities, 3 monographs, and a number of journal articles on language assessment, discourse analysis, and language teaching.
(7) Prof Kurt Geisinger
Buros Center for Testing, ITC President 2018-2020: ITC Presidential Address
Testing and Assessment in Higher Education: Uses, Misuses, and Ways Forward
Testing and assessment on the academic side of higher education have two primary foci: admissions decisions and outcomes assessment. Both contribute to the ways that higher education attempts to improve itself. Both provide certain advantages to informed users; both have possible downsides that cause problems. The primary goal of this presentation is to inform all members of the International Test Commission, all of whom come from higher education, and many of whom continue to work in higher education, about the proper uses and interpretations of such tests and assessments. While many countries employ admissions testing, others do not. Yet all colleges and universities seek to enroll students who will be successful. It is not clear how many countries require outcomes assessment, and the reasons for the use of such measures and how they can best be employed will be described. In addition, alternatives to testing and assessment will be discussed so that higher education can move forward based on scientifically based knowledge.
Kurt F. Geisinger is Director of the Buros Center for Testing and W. C. Meierhenry Distinguished University Professor at the University of Nebraska-Lincoln. He is 2018-2020 President of the International Test Commission, 2019-2020 President of the Quantitative and Qualitative Methodology division of the American Psychological Association (APA), president-elect of APA’s International Psychology division, and vice president of the Assessment and Evaluation division of the International Association of Applied Psychology, as well as the association’s treasurer. He has edited or co-edited Psychological Testing of Hispanics (two editions), Test Interpretation and Diversity, High Stakes Testing in Education, and the Handbook of Testing and Assessment in Psychology (3 volumes), all with APA Books, as well as the ITC International Handbook of Testing and Assessment, the 17th, 18th, 19th, 20th and 21st Mental Measurements Yearbooks, Pruebas Publicadas en Español, and Tests in Print VIII and Tests in Print IX. He served two terms as APA Division 5’s representative on the APA Council of Representatives. He was elected to the APA Board of Directors (2011-2013) and has served on dozens of APA task forces and committees, including the Committee on Psychological Testing and Assessment, the Advisory Group on Applied Psychology, and the Committee on International Relations in Psychology. He is a fellow of six APA divisions and of the International Association of Applied Psychology. He has published over 160 papers and has served as an expert witness in over 35 court cases, working for both plaintiffs and defendants. He built the New York City Police and Fire civil service examinations for approximately a decade, including performing job analyses, and has extensive experience working with licensing examinations. He has previously worked as a psychology department chair, a dean of arts and sciences, and an academic vice president/provost.
(8) Prof Dave Bartram
University of Kent, England
Tom Oakland Award Keynote presentation: The importance of the ITC in global testing
My working life has been pursued down two interlocking pathways. The first was an academic research and development path, which developed from an initial interest in cognition into the development of computer-based information-processing and psychomotor testing in the 1980s. This culminated in my R&D work for SHL from 1998 up to my retirement. I was privileged to work with a team that developed the Universal Competency Framework and the use of multidimensional IRT in personality assessment. The second pathway was a focus on standards, based on a belief that psychological testing is the most powerful tool developed by psychology; that its proper use is of great potential benefit, and that its misuse is potentially dangerous. To my mind, the key element in realising the potential afforded by the use of psychological measurement lies in the competence of the user. That belief lay behind my work for the BPS, EFPA, and the ITC on the development of guidelines, standards, and competence-based user qualifications. I’m delighted to say that we are now working with the ITC to provide access to the materials people need to help them become competent through the medium of online learning. With Pat Lindley and Dragoș Iliescu, I have been working on the design and implementation of the ITC Learning Centre, where ITC members will be able to access materials covering all areas of practice in tests and testing. This talk will describe the background to this development and what it could provide as a future role for the ITC, given the global reach of this organization.
Dr Dave Bartram was Chief Psychologist for CEB’s Talent Management Labs until 2016. Before SHL’s acquisition by CEB, he was Research Director for SHL and, prior to that, Dean of the Faculty of Science and the Environment and Professor of Psychology in the Department of Psychology at the University of Hull, UK. He has been awarded Fellowships by the British Psychological Society (BPS), the Ergonomics Society, the International Test Commission (ITC), the Academy of Social Sciences, the Society for Industrial and Organizational Psychology (SIOP), and the International Association of Applied Psychology (IAAP). He is Extraordinary Professor in the Department of Human Resource Management at the University of Pretoria, South Africa, and an Honorary Professor at the University of Kent. In 2004 he received the BPS Award for Distinguished Contribution to Professional Psychology and in 2015 the BPS Division of Occupational Psychology’s Lifetime Achievement Award. In 2017 he received the Robert Roe Award for contributions to society from EFPA. He has published widely in the area of psychological testing, both scientific research relating to computer-based testing and competency assessment and work on professional issues, especially occupational standards and testing standards. He is a Registered Occupational Psychologist and Chartered member of the BPS. He now operates as an independent consultant.