Peas in a Pod Discussions


Peas in a Pod Discussions are informal, face-to-face conversations with fellow conference goers who share common interests. Pods do not include presentations. There are no projectors or slides. This is all about direct exchange and exploration of ideas.

Developing and Deploying Globally-Minded Testing Programs

With the globalization of businesses, tests developed in one country are translated, adapted, and applied for use in another country. During this Peas in a Pod Discussion, attendees will discuss their own experiences in expanding their testing programs internationally while maintaining test security and integrity. Attendees will be encouraged to consider the social, political, legal, institutional, linguistic, and cultural differences between assessment settings that impact test security and integrity. Through group discussion, attendees will be invited to share their challenges and lessons learned in developing and deploying a security plan, with considerations for topics such as methods of expansion, accessibility and accommodations, market entrance and exit, remote/online proctoring, translation challenges, and cultural differences in application.

Facilitator: Kimberly Nei, Hogan Assessment Systems

 

Emerging Technologies

The discussion moderator for this Peas in a Pod has been the co-PI on the assessment component of IARPA's Sirius Games project, through all of its various stages, since 2010. This project was a challenge involving more than twelve game developers tasked with developing serious games to mitigate cognitive biases in intelligence analysts. The moderator's role involved creating three equated measures of cognitive biases, assessing the following: bias blind spot, fundamental attribution error, confirmation bias, representativeness, anchoring bias, and projection bias. Known as the Assessment of Biases in Cognition, the measure was used to evaluate the success of the game developers in mitigating the cognitive biases, compared to a more standard intervention: a video in which experts provided declarative knowledge about each bias. The results were rather striking: the video intervention outperformed the games for virtually every construct.

This discussion will include sharing some of the moderator's observations from this project.

Facilitator: Richard Roberts, ProExam

 

Five Questions You Should Be Asking About Online Proctoring

This discussion outlines five major questions that help institutions cut through marketing and sales material to answer one key question: Does this proctoring solution make exams more secure or more vulnerable?

Facilitators: Vincent Termini, ProctorU and Jarrod Morgan, ProctorU

 

Judicious Assessment of Students around the Globe

The educational assessment market is changing. Our mission is to provide instruction to all students, regardless of need. This creates a tremendously heterogeneous population to assess in a valid and reliable way. Assessment of elementary and secondary-aged students needs to be innovative, quick, inexpensive, and fair to all students while taking into account the needs of security and inclusion.

This discussion will introduce methods that are gaining ground in this area through the use of technology, while comparing international elementary and secondary education experience with that of companies focused primarily on educational assessment in the U.S. Experts from various parts of the assessment industry will discuss their own experiences in which innovation and traditional assessment thinking collide.

This discussion will include a question and answer session focused on topics such as item/task security and reuse, adaptive assessments, the competing needs of assessing all versus appropriate administration, device comparability, governmental policy changes, and potential bias in digital learning solutions.

Facilitators: Saskia Wools, Cito and Jacqueline Krain, ACT

 

Making Sense of Gamification, Open Badges and Micro-Credentials

As new trends and technologies emerge, there is always a bit of a learning curve around understanding and adoption, no matter the industry. One example within the credentialing space is Gamification, Open Badges, and Micro-Credentials. Are they all the same, or are they different?

While these have all been referenced for several years now, many individuals still struggle with what each of them means. More importantly, there remains confusion around how they are similar and how they are different.

In this discussion, participants will receive an in-depth overview of each of these tools or strategies. We will discuss common definitions for each of the terms, and we will look at current applications inside and outside of the assessment and credentialing space. Lastly, this discussion will take a look at how all of these can be used together to further our industry.

Facilitator: Jarin Schmidt, Acclaim

 

Peas in a Pod for Health Sector Professionals

Come learn about the ATP Health Sector Special Interest Group (SIG) and meet your peers working in the healthcare assessment field. The goal of the SIG is to promote best practices and professional principles that guide the testing of healthcare workers and professionals, as well as patients. The SIG aims to provide an engaging community forum that facilitates collaboration, communication, and innovation. In this session, the chair and vice-chair of the ATP Health Sector SIG will facilitate discussions regarding key issues, shared challenges, and best practices among health sector education, licensure, and credentialing professionals.

Facilitators: Ada Woo, National Council of State Boards of Nursing and David Waldschmidt, American Dental Association

 

Sharing The Mistakes That Assessment Professionals Make

Many entrepreneurs (e.g., Richard Branson) share the mistakes they have made with the public because they fundamentally believe that those mistakes can become practical, problem-based learning opportunities for others. By contrast, assessment professionals rarely share their mistakes with the public. Because we firmly believe that the lessons others can learn from our mistakes, as well as the process of sharing them, are vital to the lifelong learning of any professional, the discussion moderators will share key mistakes they have made over the course of their careers (e.g., during the facilitation of a standard-setting session, presenting results to key stakeholders who then misinterpreted them, failing to recognize a significant error in a certification process before launch), the lessons learned, and how those mistakes informed future decision making. The panel will create an environment where the audience feels comfortable sharing their own mistakes, lessons learned, and what they might have done differently, with input and suggestions from other attendees. This discussion will include representation from all five ATP divisions.

Facilitators: Hakan Fritz, SLG Thomas International and Alina von Davier, ACT

 

Should Your Organization Consider a Stackable Credentialing Model?

During this discussion, the moderators will define stackable credentials and describe their uses and value. There are a few models, often linear, in which the credential expands across a career and competency levels (entry level, mid-career, and seasoned). As credentials stack on one another, they are expected to include increasingly cognitively demanding and difficult assessment content. The moderators will then provide examples from the industry and their own organizations.

Facilitators: Manny Straehle, Assessment, Education, and Research Experts, LLC; Liberty Munson, Microsoft; and Christine Niero, Professional Testing Inc.

 

What’s Next in Educational Assessment?

Our society is changing rapidly. Think of emerging technology, more complex social interactions, less geopolitical stability, and highly demanding jobs. To make sure students are prepared to work and live in this changing world, education and educational assessment must adapt as well. What can we do to innovate our products? And are there examples we can learn from?

In this discussion, we will provide an overview of innovations in educational assessment aligned with several important trends. These trends are technical, pedagogical, and psychometric in nature. Trends that will be addressed include formative assessment, adaptive learning and testing, soft skills or 21st century skills, new devices, and learning analytics. With this overview, we exemplify innovative assessments that aim to help students show the best of themselves. The insights provided by these assessments can be used to enhance learning outcomes.

All trends will be illustrated with real-life examples and a reflection on future possibilities and challenges. Participants will be inspired to think of ways to incorporate technological advances to address their measurement problems. Furthermore, they are urged to think outside the box to stay up to speed with the changing world. During this session, participants are invited to share their views on the observed trends and to reflect on how these trends, and the instruments or methods that come with them, enhance educational effectiveness.

Facilitators: Saskia Wools, Cito and Mirna Pit, Bureau ICE

 

Visit our full programme
27 - 29 September 2017
Grand Hotel Huis ter Duin
Noordwijk, The Netherlands