(1) Prof Sacha Epskamp
University of Amsterdam, Netherlands
Network Psychometrics: Current State and Future Directions
The novel field of network psychometrics focuses on the estimation of network models aiming to capture interactions between observed variables. In this presentation, I will introduce this field and its main recent advances, and I will discuss future directions and challenges the field has yet to face. First, I will discuss the estimation of networks from datasets ranging from data with independent cases (e.g., cross-sectional data) to datasets of multiple time-series. Second, I will discuss the formalization of network models as formal psychometric models, which allows for their combination with the general frameworks of structural equation modeling and item-response theory. I will discuss model equivalences between network and factor models and generalizations of network models that encompass latent variable structures. Finally, I will discuss future directions in network psychometrics, such as the handling of missing data, ordinal data, network-based adaptive assessment, and forming network models using theoretical knowledge.
Sacha Epskamp is an assistant professor at the University of Amsterdam, Department of Psychological Methods, and a research fellow at the Institute for Advanced Study of the University of Amsterdam. In 2017, Sacha Epskamp completed his PhD on network psychometrics – estimating network models from psychological datasets and equating these to established psychometric modeling techniques. He has implemented these methods in several software packages now routinely used in diverse fields of psychological research. Sacha Epskamp teaches multivariate statistics and data science, and his research interests involve reproducibility, complexity, time-series modeling, and dynamical systems modeling. In addition to the Psychometric Society Dissertation Prize, Sacha Epskamp has received several awards for his research, including the Leamer-Rosenthal Prize for Open Science (2016).
(2) Prof Jonathan Templin
University of Iowa, USA
Building a Diagnostic Model-Based Formative Assessment System for Personalizing Educational Progress
Personalized learning, despite being a perhaps overused term, is a topic of great interest in a number of educational fields, from the learning sciences to assessment. Often, the term is used to describe how some product allows for near-instant results that help identify what students should be studying. Often lost in the pursuit of personalized learning systems is the role of the teacher. In this presentation, I describe efforts to build a formative assessment system to empower teachers by providing up-to-the-minute information about what their students know. The system implements versions of recently developed diagnostic psychometric models that, when paired with small, regularly administered progress assessments, can provide an accurate and up-to-date snapshot of students’ knowledge states. Further, this system could be enhanced by a prediction system that enables multiple measures to contribute to student estimates. The system seeks to provide richer, more detailed feedback on each student to teachers, in order to help them decide what would best support each student’s educational progress.
Jonathan Templin is Professor and E. F. Lindquist Chair in the Department of Psychological and Quantitative Foundations at the University of Iowa. Dr. Templin received his Ph.D. in Quantitative Psychology at the University of Illinois at Urbana-Champaign in 2004, where he also received an M.S. in Statistics in 2002. He joined the faculty of the University of Iowa in January 2019, after stints on the faculty at the University of Kansas, the University of Nebraska-Lincoln, and the University of Georgia. The main focus of Dr Templin’s research is in the field of diagnostic classification models—psychometric models that seek to provide multiple actionable and reliable scores from educational and psychological assessments. He also studies Bayesian statistics, as applied in psychometrics, broadly. Dr Templin’s research program has been funded by the United States National Science Foundation and Institute of Education Sciences and has been published in journals such as Psychometrika, Psychological Methods, Applied Psychological Measurement, and the Journal of Educational Measurement. In 2014, he was elected as a member of the Society of Multivariate Behavioral Research. Dr Templin is currently an outgoing co-editor of the Journal of Educational Measurement and an outgoing Associate Editor for Psychometrika. He is co-author of the 2010 book Diagnostic Measurement: Theory, Methods, and Applications, which won the 2012 American Educational Research Association Division D Award for Significant Contribution to Educational Measurement and Research Methodology. He is the winner of the 2015 AERA Cognition and Assessment SIG Award for Outstanding Contribution to Research in Cognition and Assessment and the inaugural 2017 Robert Linn Lecture Award.
(3) Prof Anita Hubley
University of British Columbia, Canada
Contributions of Response Processes to Test Validation and Development
Responding to test items and tasks is a complex human behaviour; response processes involve an interaction among the test taker, test items or tasks, responses or response options, and the testing context (Hubley, 2017; Hubley, 2021; Launeanu & Hubley, 2017). As one of the five sources of validity evidence in the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999, 2014), response processes evidence tends to be poorly understood by researchers and under-utilized relative to other sources such as internal structure and relations with other variables (Zumbo & Hubley, 2017). If scores are to be meaningful and useful, it is critically important in test development and validation work to determine the degree to which the test developer, test user, and test takers interpret the meaning of items or tasks in the same way, use relevant information or appropriate strategies in arriving at their responses, and respond in a meaningful way (Hubley, 2021; Launeanu & Hubley, 2017). In recent years, there has been an influx of research incorporating response processes evidence. In this talk, I will define response processes, describe some different forms of response processes evidence, discuss and evaluate its use in the validation of social science measures based on a recent scoping review as well as my recent and current work examining response processes related to positively and negatively worded and keyed items, and provide suggestions for how response processes information can be used in test development, revision, and validation.
Dr. Anita Hubley is a Full Professor and Killam Laureate in the Department of Educational and Counselling Psychology and Special Education at the University of British Columbia (UBC), where she is Coordinator and member of the Measurement, Evaluation, and Research Methodology program, member of the Counselling Psychology program, and Director of the Adult Development and Psychometrics Lab. She received her Ph.D. in Psychology in 1995 with a specialization in Human Assessment. Dr. Hubley is recognized internationally for her expertise in test development, validity, and psychological and health assessment and measurement across the adult lifespan, including with vulnerable populations. She has published over 100 academic articles and book chapters on various topics, including reliability, validity, and the development and validation of measurement instruments. She has also developed several clinical, health, and psychological tests. She has been a principal or co-investigator on numerous grants involving the development or psychometric evaluation of tests, given 110+ presentations or invited addresses at conferences, and has given two workshops for the International Test Commission on evaluating reliability and validity studies. She is an Associate Editor for the new Springer journal Measurement Instruments for the Social Sciences, a section editor and editorial board member for the Encyclopedia of Quality of Life Research, and on the editorial boards of Journal of Psychoeducational Assessment, Social Indicators Research, and the Canadian Journal of School Psychology. She is a former member of the Executive Council of the International Test Commission (ITC) and former Editor of the ITC’s publication Testing International.
(4) Dr John Fremer and Dr David Foster
Caveon Test Security, USA
Challenges Confronted and Lessons Learned: Protecting Test Content and Personal Information from Test Security Threats in International Testing Programs
As we move through the first quarter of the 21st century, challenges to the security of our testing programs have been intensifying. Growing numbers of thieves are stealing and selling test questions, miniaturization technology is supporting undetectable recordings of testing sessions, and cheating technology is being sold on the internet. The ongoing validity of our test scores is being seriously threatened. In response, the last decade has seen significant advances in data forensic science, web monitoring models, secure item designs, and test delivery safeguards. While these measures have improved test security in many assessment programs, the threat remains clear, present, and dangerous. The presenters will highlight test security responses to the growing security challenges in the areas of protection, deterrence, detection, and follow-up actions. Specific and actionable ideas will be provided for test program planners and managers to enhance security in every aspect of a high-stakes assessment program. Major lessons learned from dealing with security challenges in international testing programs will be shared, offering ideas that have stood the test of time. Many testing programs are looking for new solutions. Often these solutions move beyond the century-old reliance on static multiple-choice items, traditional proctoring methods, and conventional test administration models. The presentation will look at research underway and new technologies being developed to help programs transition from traditional approaches to more secure testing environments for their programs and their examinees. Future directions for test security will be considered, including the promise of cheat-resistant item design and delivery. Test security threats will continue to evolve. We will all have to keep learning and evolving if we are to protect our testing programs and the services they provide.
John Fremer is a Founder of Caveon Test Security, a company that helps improve security in test development, test administration, reporting, and score use. He serves as President, Caveon Consulting Services. John has 45+ years of testing experience, including management positions at ETS and Pearson. John is a Past President of the Association of Test Publishers (ATP) as well as the National Council on Measurement in Education (NCME) and the Association for Assessment in Counseling (AAC). John received the 2007 ATP Award for Contributions to Measurement. He served as editor for the NCME journal Educational Measurement: Issues and Practice. He is co-Editor with Jim Wollack of the Handbook of Test Security (2011). John presents frequently at national and international testing conferences. John has a B.A. from Brooklyn College, CUNY, where he graduated Phi Beta Kappa and Magna Cum Laude, and a Ph.D. from Teachers College, Columbia University.
From 1990 to 1997, David directed certification test development at Novell, where he introduced CAT on a worldwide scale, pioneered new item types, and launched simulation-based testing. He co-founded Galton Technologies to provide test development technology and services to certification programs. In 2003 David co-founded the industry’s first test security company, Caveon. Under his guidance, Caveon has created new security tools, analyses, and services to protect exams. David has focused on changing the design of items and tests to reduce the harmful effects of cheating and testwiseness. He has served on numerous boards and committees. He also founded the Performance Testing Council. He has authored numerous articles for industry publications and journals and has presented extensively at conferences. David graduated from Brigham Young University in 1977 with a Ph.D. in Experimental Psychology. He completed a post-doctoral fellowship at Florida State University in 1981.
(5) Dr Sara Ruto
PAL Network, Kenya
Measuring Learning for All Children: The Citizen Led Assessment Approach
Twenty-five years ago, the nations of the world affirmed their commitment to enhancing inclusion through the Salamanca Declaration. There have been subsequent commitments, most recently articulated globally in the Sustainable Development Goals. While many gains have been made, it is vital to reflect on the extent to which measurements of learning, such as national assessments, have expanded their boundaries to be more inclusive. Learning assessments have played a critical role in providing data to illuminate that schooling does not equate to learning. Indeed, national assessments provide evidence that informs whether the system works for all children. Assessments can identify problem areas in children’s learning trajectories as well as patterns with respect to specific subpopulations that may be struggling more than others. However, for many countries in the global south, the design of traditional large-scale learning assessments – whether national examinations or regional/international standardized tests – subverts these objectives from the very beginning. First, because they are designed as pen-and-paper assessments, they assume that the children taking these tests have the foundational skills necessary to enable them to respond adequately to test items. In reality, very large proportions of children in the global south may not have writing skills, implying a need to rethink the format of testing. Second, many measurements are conducted using samples of registered schools. In reality, children from disadvantaged households often attend unregistered schools, or may not be in school at all, implying that the sites where tests are conducted need scrutiny. In addition, standardized testing is an exclusive exercise often reserved for the school community. The important process of understanding what “learning” looks like and how to measure it is not communicated to important actors in children’s lives – family and community members, many of whom have perhaps not themselves been to school.
Third, although test items are often designed to generate a deep understanding of children’s learning, only a very small number of highly trained individuals in any given context are able to understand and interpret testing data, thus limiting its usefulness as a tool for catalysing action to a handful of people. To summarize, most standardized testing ignores the realities not only of the children in the global south, but equally the realities of the adults within and outside the school system who are in a position to use testing data to inform action. The citizen-led assessment (CLA) approach, implemented by the 14 member countries of the People’s Action for Learning (PAL) Network, is designed to address these realities. The presentation will delve into why it is important to re-examine our research designs so that the data derived, and indeed the processes used, are more inclusive. The paper will conclude with a presentation of how the PAL Network is collecting comparable data that can be used to measure progress towards Sustainable Development Goal 4.1.1 on literacy and numeracy for all children, whether in school or not. The presentation will posit the citizen-led assessment approach as a complementary approach that can provide more comprehensive data to advance global education goals and ensure that no child is left behind.
Sara Ruto is the Director of the PAL Network, which currently comprises civil society organizations conducting citizen-led assessments in 14 countries in Africa, Asia and Latin America. The focus of the assessments is reading and numeracy. In addition, she manages an organisation known as ziziAfrique that focusses on evidence-based intervention with the purpose of informing the quality of educational provision. Prior to serving in this position, Sara initiated the citizen-led process in Kenya in 2009, which currently operates as Uwezo, and thereafter managed the Uwezo East Africa learning assessment. She sits on several committees, including those of the Global Education Monitoring Report, the World Bank’s SABER Technical Advisory Board and the INCLUDE Knowledge Platform. Her current role as Chair of the Kenya Institute of Curriculum Development provides an opportunity to participate actively in the current education reform process in Kenya. She trained as a teacher at Kenyatta University in Kenya, and obtained her doctorate from Heidelberg University in Germany.
(6) Prof Lianzhen He
Zhejiang University, China
China’s Standards of English Language Ability: Impetus for Change in Language Learning, Teaching and Assessment
To support China’s development in the new era and to cultivate high-calibre personnel, important changes have taken place across the education system in China over the past few years. In the field of foreign language education, great attention is given to the introduction of formative assessment and a shift away from teaching to the test towards the development of core communicative competencies. In 2014, the State Council of China issued the Implementation Opinions on Deepening the Reform of Examinations and Enrolment, calling for the development of a new assessment system for foreign languages. An important part of the proposed system is the development of China’s Standards of English Language Ability (CSE). The CSE, specifically related to the Chinese EFL context, is expected to provide a set of transparent and consistent standards of English language proficiency to enhance the communication between English teaching, learning and assessment. Studies in relation to the CSE are being conducted, linking newly developed, locally designed tests, as well as well-established international English language tests such as IELTS and TOEFL, to the framework. In this talk, a brief introduction to the background, the theoretical framework of the CSE, and the development process will be given, with a special focus on the CSE’s relation to established international English language tests. The potential impact of the CSE on the Chinese education system as a whole might serve as an instructive example for other national language learning and assessment programs.
Prof. He’s main research interests are language testing and English language teaching. She received her Master’s degree from the University of Birmingham (1993) and her PhD degree in linguistics and applied linguistics from Guangdong Foreign Studies University, China (1998). She was a senior visiting scholar at the University of California, Los Angeles in 2004 and was local chair of the 2008 Language Testing Research Colloquium (LTRC) held in Hangzhou. She was the Benjamin Meaker Visiting Professor at the University of Bristol in 2014. She has also been a keynote speaker at several international conferences. She has directed more than 10 major research projects on language testing and language teaching. She is also Chair of the Advisory Board of Foreign Language Teaching and Learning in Higher Education Institutions in China and National Professor of Distinction. She has been on the editorial board of a number of journals including Language Assessment Quarterly and has been a member of the TOEFL COE since 2015. She has published widely in applied linguistics and language testing, including over 30 English textbooks which are used nationwide in Chinese universities, 3 monographs, and a number of journal articles on language assessment, discourse analysis and language teaching.
(7) Prof Aletta Odendaal
Stellenbosch University, South Africa
Psychological Testing and Assessment in Developing Contexts: Shifting the Boundaries of Theory and Practice
Aletta Odendaal, Ph.D., is an Associate Professor and Head of the Department of Industrial Psychology at Stellenbosch University, South Africa. She is a licensed Industrial Psychologist and Master Human Resource Professional with more than 20 years’ experience in applied psychological assessment, strategic leadership development and executive coaching. Her passion and commitment towards improving the conditions governing test use and development in multicultural contexts, as well as setting standards of practice in developing countries, are reflected in her national and international leadership and involvement in different professional societies and regulatory bodies. She is a fellow and past president of the Society for Industrial and Organisational Psychology of South Africa (SIOPSA) and currently President-elect of the International Test Commission.
(8) Prof Dave Bartram
University of Kent, England
Tom Oakland Award Keynote presentation: The importance of the ITC in global testing
My working life has been pursued down two interlocking pathways: an academic research and development path which developed from an initial interest in cognition into the development of computer-based information-processing and psychomotor testing in the 1980s. This culminated in my R&D work for SHL from 1998 up to my retirement. I was privileged to work with a team that developed the Universal Competency Framework and the use of multi-dimensional IRT in personality assessment. The second pathway was a focus on standards, based on a belief that psychological testing was the most powerful tool developed by psychology; that its proper use was of great potential benefit and that its misuse was potentially dangerous. To my mind, the key element in realising the potential afforded by the use of psychological measurement lay in the competence of the user. That belief lay behind my work for the BPS, EFPA and the ITC in the development of guidelines, standards and competence-based user qualifications. I’m delighted to say that we are now working with the ITC to provide access to the materials people need to help them become competent through the medium of online learning. With Pat Lindley and Dragos Iliescu, I have been working on the design and implementation of the ITC Learning Centre, where ITC members will be able to access materials covering all areas of practice in tests and testing. This talk will describe the background to this development and what it could provide as a future role for the ITC, given the global reach of this organization.
Dr Dave Bartram was Chief Psychologist for CEB’s Talent Management Labs until 2016. Before SHL’s acquisition by CEB he was Research Director for SHL, and prior to that Dean of the Faculty of Science and the Environment and Professor of Psychology in the Department of Psychology at the University of Hull, UK. He has been awarded Fellowships by the British Psychological Society (BPS), the Ergonomics Society, the International Test Commission (ITC), the Academy of Social Sciences, the Society for Industrial and Organizational Psychology (SIOP) and the International Association of Applied Psychology (IAAP). He is Extraordinary Professor in the Department of Human Resource Management at the University of Pretoria, South Africa, and an Honorary Professor at the University of Kent. In 2004 he received the BPS Award for Distinguished Contribution to Professional Psychology and in 2015 the BPS Division of Occupational Psychology’s Lifetime Achievement Award. In 2017 he received the Robert Roe Award for contributions to society from EFPA. He has published widely in the area of psychological testing, both scientific research relating to computer-based testing and competency assessment and in relation to professional issues, especially occupational standards and testing standards. He is a Registered Occupational Psychologist and Chartered member of the BPS. He now operates as an independent consultant.