Responding to test items and tasks is a complex human behaviour; response processes involve an interaction among the test taker, test items or tasks, responses or response options, and the testing context (Hubley, 2017; Launeanu & Hubley, 2017). As one of the five sources of validity evidence in the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999, 2014), response processes evidence tends to be poorly understood by researchers and under-utilized relative to other sources such as internal structure and relations with other variables (Zumbo & Hubley, 2017). It is critically important in test development and validation to determine the degree to which the test developer, test user, and test takers interpret the meaning of items or tasks in the same way if the scores are to be meaningful and useful (Hubley, 2017; Launeanu & Hubley, 2017). In recent years, there has been an influx of research incorporating response processes evidence. I present a definition for response processes, describe different forms of response processes evidence, discuss and evaluate its use in the validation of social science measures based on a recent scoping review, and provide suggestions for how response processes information can be used in test development, revision, and validation.
Dr. Anita Hubley is a Full Professor and Killam Laureate in the Department of Educational and Counselling Psychology and Special Education at the University of British Columbia (UBC), where she is Coordinator and member of the Measurement, Evaluation, and Research Methodology program, member of the Counselling Psychology program, and Director of the Adult Development and Psychometrics Lab. She received her Ph.D. in Psychology in 1995 with a specialization in Human Assessment. Dr. Hubley is recognized internationally for her expertise in test development, validity, and psychological and health assessment and measurement across the adult lifespan, including with vulnerable populations. She has published over 100 academic articles and book chapters on various topics, including reliability, validity, and the development and validation of measurement instruments. She has also developed several clinical, health, and psychological tests. She has been a principal or co-investigator on numerous grants involving the development or psychometric evaluation of tests, given 110+ presentations or invited addresses at conferences, and has given two workshops for the International Test Commission on evaluating reliability and validity studies. She is an Associate Editor for the new Springer journal Measurement Instruments for the Social Sciences, a section editor and editorial board member for the Encyclopedia of Quality of Life Research, and on the editorial boards of Journal of Psychoeducational Assessment, Social Indicators Research, and the Canadian Journal of School Psychology. She is a former member of the Executive Council of the International Test Commission (ITC) and former Editor of the ITC’s publication Testing International.
Personalized learning, despite its perhaps over-use as a term, is a topic of great interest in a number of educational fields, from the learning sciences to assessment. Often, the term is used to describe how some product provides near-instant results that help determine what students should be studying. What is often lost in the pursuit of personalized learning systems is the role of the teacher. In this presentation, I describe efforts to build a formative assessment system that empowers teachers by providing up-to-the-minute information about what their students know. The system implements versions of recently developed diagnostic psychometric models that, when paired with small, regularly administered progress assessments, can provide an accurate and up-to-date snapshot of students’ knowledge states. Further, this system could be enhanced by a prediction system that enables multiple measures to inform student estimates. The system seeks to provide richer, more detailed student feedback to teachers in order to help them decide what would be best for each student’s educational progress.
Jonathan Templin is Professor and E. F. Lindquist Chair in the Department of Psychological and Quantitative Foundations at the University of Iowa. Dr. Templin received his Ph.D. in Quantitative Psychology at the University of Illinois at Urbana-Champaign in 2004, where he also received an M.S. in Statistics in 2002. He joined the faculty of the University of Iowa in January 2019, after stints on the faculty at the University of Kansas, the University of Nebraska-Lincoln, and the University of Georgia. The main focus of Dr. Templin's research is in the field of diagnostic classification models—psychometric models that seek to provide multiple actionable and reliable scores from educational and psychological assessments. He also studies Bayesian statistics, as applied in psychometrics, broadly. Dr. Templin’s research program has been funded by the United States National Science Foundation and Institute of Education Sciences and has been published in journals such as Psychometrika, Psychological Methods, Applied Psychological Measurement, and the Journal of Educational Measurement. In 2014, he was elected as a member of the Society of Multivariate Behavioral Research. Dr. Templin is currently an outgoing co-editor of the Journal of Educational Measurement and an outgoing Associate Editor for Psychometrika. He is co-author of the 2010 book Diagnostic Measurement: Theory, Methods, and Applications, which won the 2012 American Educational Research Association Division D Award for Significant Contribution to Educational Measurement and Research Methodology. He is the winner of the 2015 AERA Cognition and Assessment SIG Award for Outstanding Contribution to Research in Cognition and Assessment and the inaugural 2017 Robert Linn Lecture Award.
The novel field of network psychometrics focuses on the estimation of network models aiming to capture interactions between observed variables. In this presentation, I will introduce this field and its main recent advances, and I will discuss future directions and challenges the field has yet to face. First, I will discuss the estimation of networks from datasets ranging from data with independent cases (e.g., cross-sectional data) to datasets of multiple time-series. Second, I will discuss the formalization of network models as formal psychometric models, which allows for their combination with the general frameworks of structural equation modeling and item response theory. I will discuss model equivalences between network and factor models and generalizations of network models that encompass latent variable structures. Finally, I will discuss future directions in network psychometrics, such as the handling of missing data and ordinal data, network-based adaptive assessment, and the formation of network models using theoretical knowledge.
Sacha Epskamp is an assistant professor at the University of Amsterdam, Department of Psychological Methods, and a research fellow at the Institute for Advanced Studies of the University of Amsterdam. In 2017, Sacha Epskamp completed his PhD on network psychometrics—estimating network models from psychological datasets and equating these to established psychometric modeling techniques. He has implemented these methods in several software packages now routinely used in diverse fields of psychological research. Sacha Epskamp teaches multivariate statistics and data science, and his research interests involve reproducibility, complexity, time-series modeling, and dynamical systems modeling. In addition to the Psychometric Society Dissertation Prize, Sacha Epskamp has received several awards for his research, including the Leamer-Rosenthal Prize for Open Science (2016).
to be added, stay tuned
Aletta Odendaal, Ph.D. is an Associate Professor and Head of the Department of Industrial Psychology at Stellenbosch University, South Africa. She is a licensed Industrial Psychologist and Master Human Resource Professional with more than 20 years’ experience in applied psychological assessment, strategic leadership development, and executive coaching. Her passion and commitment towards improving conditions governing test use and development in multicultural contexts, as well as setting standards of practice in developing countries, are reflected in her national and international leadership and involvement in different professional societies and regulatory bodies. She is a fellow and past president of the Society for Industrial and Organisational Psychology of South Africa (SIOPSA) and currently President-elect of the International Test Commission.
to be added, stay tuned
Prof. He's main research interests are language testing and English language teaching. She received her Master’s degree from the University of Birmingham (1993) and her PhD in linguistics and applied linguistics from Guangdong Foreign Studies University, China (1998). She was a senior visiting scholar at the University of California, Los Angeles in 2004 and was local chair of the 2008 Language Testing Research Colloquium (LTRC) held in Hangzhou. She was the Benjamin Meaker Visiting Professor at the University of Bristol in 2014. She has also been a keynote speaker at several international conferences. She has directed more than 10 major research projects on language testing and language teaching. She is also Chair of the Advisory Board of Foreign Language Teaching and Learning in Higher Education Institutions in China and a National Professor of Distinction. She has been on the editorial boards of a number of journals, including Language Assessment Quarterly, and has been a member of the TOEFL COE since 2015. She has published widely in applied linguistics and language testing, including over 30 English textbooks that are used nationwide in Chinese universities, 3 monographs, and a number of journal articles on language assessment, discourse analysis, and language teaching.
As we move through the first quarter of the 21st century, challenges to the security of our testing programs have been intensifying. Growing numbers of thieves are stealing and selling test questions, miniaturization technology is supporting undetectable recordings of testing sessions, and cheating technology is being sold on the internet. The ongoing validity of our test scores is being seriously threatened. In response, the last decade has seen significant advances in data forensic science, web monitoring models, secure item designs, and test delivery safeguards. While these measures have improved test security in many assessment programs, the threat remains clear, present, and dangerous. The presenters will highlight test security responses to the growing security challenges in the areas of protection, deterrence, detection, and follow-up actions. Specific and actionable ideas will be provided for test program planners and managers to enhance security in every aspect of a high-stakes assessment program. Major lessons learned from dealing with security challenges in international testing programs will be shared, offering ideas that have stood the test of time. Many testing programs are looking for new solutions. Often these solutions move beyond the century-old reliance on static multiple-choice items, traditional proctoring methods, and conventional test administration models. The presentation will look at research underway and new technologies being developed to help programs transition from traditional approaches to more secure testing environments for their programs and their examinees. Future directions for test security will be considered, including the promise of cheat-resistant item design and delivery. Test security threats will continue to evolve. We will all have to keep learning and evolving if we are to protect our testing programs and the services they provide.
John Fremer is a Founder of Caveon Test Security, a company that helps improve security in test development, test administration, reporting, and score use. He serves as President, Caveon Consulting Services. John has 45+ years of testing experience, including management positions at ETS and Pearson. John is a Past President of the Association of Test Publishers (ATP) as well as the National Council on Measurement in Education (NCME) and the Association for Assessment in Counseling (AAC). John received the 2007 ATP Award for Contributions to Measurement. He served as editor for the NCME journal Educational Measurement: Issues and Practice. He is co-Editor with Jim Wollack of the Handbook of Test Security (2011). John presents frequently at national and international testing conferences. John has a B.A. from Brooklyn College, CUNY, where he graduated Phi Beta Kappa and Magna Cum Laude, and a Ph.D. from Teachers College, Columbia University.
David Foster's summary is to be added, stay tuned!
Twenty-five years ago, nations of the world affirmed their commitment to enhancing inclusion through the Salamanca declaration. There have been subsequent commitments, most recently articulated globally in the sustainable development goals. While many gains have been made, it is vital to reflect on the extent to which measurements of learning, such as national assessments, have expanded their boundaries to be more inclusive. Learning assessments have played a critical role in providing data to illuminate that schooling does not equate to learning. Indeed, national assessments provide evidence that informs whether the system works for all children. Assessments can identify problem areas in children’s learning trajectories as well as patterns with respect to specific subpopulations that may be struggling more than others. However, for many countries in the global south, the design of traditional large-scale learning assessments—whether national examinations or regional/international standardized tests—subverts these objectives from the very beginning. First, because they are designed as pen-and-paper assessments, they assume that the children taking these tests have the foundational skills necessary to enable them to respond adequately to test items. In reality, very large proportions of children in the global south may not have writing skills, implying the need to rethink the format of testing. Second, many measurements are conducted using samples of registered schools. In reality, children from disadvantaged households often attend unregistered schools, or may not be in school at all, implying that the sites where tests are conducted need scrutiny. In addition, standardized testing is an exclusive exercise often privileged for the school community. The important process of understanding what “learning” looks like and how to measure it is not communicated to important actors in children’s lives—family and community members, many of whom have perhaps not themselves been to school.
Third, although test items are often designed to generate a deep understanding of children’s learning, only a very small number of highly trained individuals in any given context are able to understand and interpret testing data, thus limiting its usefulness as a tool for catalysing action to a handful of people.
To summarize, most standardized testing ignores the realities not only of the children in the global south, but equally the realities of the adults within and outside the school system who are in a position to use testing data to inform action. The citizen-led assessment (CLA) approach, implemented by the 14 member countries of the People’s Action for Learning (PAL) network, is designed to address these realities. The presentation will delve into why it is important to re-examine our research designs so that the data derived, and indeed the processes used, are more inclusive. The paper will conclude with a presentation of how the PAL Network is collecting comparable data that can be used to measure progress towards Sustainable Development Goal 4.1.1 on literacy and numeracy for all children, whether in school or not. The presentation will posit the citizen-led assessment approach as a complementary approach that can provide more comprehensive data to advance global education goals and ensure that no child is left behind.
Sara Ruto is the Director of the PAL Network. The PAL Network currently comprises civil society organizations that are conducting citizen-led assessments in 14 countries in Africa, Asia, and Latin America. The focus of the assessments is reading and numeracy. In addition, she manages an organisation known as ziziAfrique that focuses on evidence-based intervention with the purpose of informing the quality of educational provision. Prior to serving in this position, Sara initiated the citizen-led process in Kenya in 2009 that currently operates as Uwezo and thereafter managed the Uwezo East Africa learning assessment. She sits on several committees, such as the Global Education Monitoring Report, the World Bank’s SABER Technical Advisory Board, and the INCLUDE Knowledge Platform. Her current role as Chair of the Kenya Institute of Curriculum Development provides an opportunity to participate actively in the current education reform process in Kenya. She trained as a teacher at Kenyatta University in Kenya, and obtained her doctorate from Heidelberg University in Germany.
Advances in methods for the non-invasive study of brain activity, such as EEG and fMRI, have over recent decades promised much for the clinical health sciences and, by (considerable) extrapolation, for other fields such as education, economics, and marketing. They have also engendered ‘neuromyths’ and debate between the so-called ‘neuro-entrepreneurs’ and the ‘neuro-skeptics’. Against this background, the presentation considers possible uses of these methods in organisations for selection and assessment, targeting constructs such as job performance, organisational citizenship, and leadership. Technical feasibility and cost-effectiveness presently rule out their use, but the rate of technological advance is such that these may not be considerations for much longer. What is currently known about task-dependent and task-independent measures of brain activity (neuromarkers) is considered in terms of standard criteria for test development and use, including reliability, validity, predictive efficiency, freedom from motivational distortion, and privacy. The issue of validity is seen as the central concern in attempts to use neuromarkers in place of, or in addition to, traditional assessment methods, given the conceptual gap between these measures and behaviour and mentation in workplace settings.
On completing his PhD at Queensland University in 1970, John served in the Australian Army Psychology Corps during the Vietnam War and subsequently as a consultant in selection and assessment to the Corps and to the Office of the Public Service Board, Canberra. At the University of New England, Armidale, New South Wales he was, for various periods, head of the Department of Psychology and Dean of the Faculty of Arts, before being appointed in 1989 Foundation Professor of Psychology and Head of School at Griffith University in Queensland. For 10 years at Griffith he was Dean of the Faculty of Health Sciences (formerly Health and Behavioural Science), and during that period was Chair of the Psychologists Board of Queensland before the era of national registration. He joined Australian Catholic University as a Pro-Vice-Chancellor in 2004 with special responsibilities for quality management and for community engagement, including Indigenous education and the community education program for disadvantaged Australians. He returned to Griffith University in 2009 and was, for various periods until his retirement in 2017, Acting Director of the Australian Institute for Suicide Research and Prevention. His contributions to the work of the Australian Psychological Society included chairing the College of Organisational Psychologists, chairing the Society’s Course Development and Accreditation Committee, and serving as a member of Council. He was awarded the President’s Award for Lifetime Contribution to Psychology in 2013. He is a former Editor of Australian Journal of Psychology, a former Associate Editor of Biological Psychology, author of Psychology as a Profession in Australia published by Australian Academic Press, a co-author (with Shum, Myors, and Creed) of Psychological Testing and Assessment published by Oxford University Press, and co-editor (with Boyle and Fogarty) of the 5-volume set Work and Organisational Psychology published by Sage. 
John is a registered psychologist, endorsed in organisational psychology, a Fellow of the Australian Psychological Society and a Professor Emeritus of Australian Catholic University and Griffith University.