NEW! Snapshot Sessions


Snapshot Sessions are thirty-minute interactive breakout sessions. This new session type allows presenters to share information with the audience in a more concise format than a full breakout session.

A Focus on the Candidate: Emerging Technology as a Driver of Candidate Experience

The global talent assessment market is experiencing some of its most fundamental changes since the advent of computer-based delivery over 20 years ago. Emerging technologies are changing the focus for many organisations as they seek to roll out global talent solutions that appeal to a wide range of applicants, manage different technological expectations and deliver a candidate experience that resonates with both their employer and consumer brands.

This Snapshot session will outline the evolution of technology-driven assessments and the increasing impact they can have on the end-to-end candidate experience. The session will highlight a range of client case studies from Virgin Media, Formaposte, Emirates and Jaguar Land Rover, and examine how these organisations use mobile delivery, animated previews, gamification, virtual interaction and meaningful candidate feedback to deliver immersive, interactive and engaging assessment experiences that drive a wide range of human capital metrics.

These metrics go beyond traditional measures of quality of hire and enhanced job performance and focus on broader organisational outcomes including:

  • Candidate engagement
  • Offer acceptance
  • Candidate perceptions of the organisation (unsuccessful and successful)
  • Candidate motivation to buy from/use services (unsuccessful and successful)
  • Levels of self-selection
  • Candidate expectations of role/organisation
  • Employee awareness of values/ways of working
  • Enhanced employability in local talent pools

Presenter: Chris Small, PSI

 

Big Wrong Data: Making AI (and big data) Work in Correction and Feedback Memories

Big Wrong Data is built around the idea of creating an error and feedback memory to revise answers (texts, translations, gap-fill questions...) in a fast, efficient and objective way. The revisor can add new errors and feedback during the revision process and apply them to all previous and future answers at once.

  • This saves the revisor a significant amount of time and supports them in revising the answers objectively.
  • The approach is highly innovative because it focuses not on the correct answers but on the errors produced!
  • The core of the application is a database that can be corrected, updated and supplemented at any time with every new answer and revision.

This database is in effect a correction or revision “memory”: it allows the system to recognise errors in new answers and to suggest corrections and feedback automatically. Of course a human evaluator can still accept or reject those suggestions, but the bulk of the workload of typing and retyping the same corrections and feedback over and over again is now handled immediately, rapidly and objectively by the Big Wrong Data engine.
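
The abstract does not spell out an implementation, but the core mechanism it describes is a growing store of known errors, each paired with a correction and feedback, that is matched against every past and future answer. A minimal Python sketch of that idea follows; the class and method names are invented for the example, and the real engine is far more sophisticated than the naive substring lookup used here.

    # Minimal sketch of an error-and-feedback "memory" (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class ErrorEntry:
        error: str        # the erroneous fragment to look for
        correction: str   # the suggested replacement
        feedback: str     # the feedback shown alongside the correction

    @dataclass
    class ErrorMemory:
        entries: list = field(default_factory=list)

        def add(self, error, correction, feedback):
            """Record a new error; it now applies to all past and future answers."""
            self.entries.append(ErrorEntry(error, correction, feedback))

        def suggest(self, answer):
            """Return every known error found in an answer, as suggestions
            that a human revisor can still accept or reject."""
            return [e for e in self.entries if e.error in answer]

    memory = ErrorMemory()
    memory.add("there house", "their house", "'there' is a place; 'their' is possessive.")
    for entry in memory.suggest("They painted there house last summer."):
        print(f"Found '{entry.error}': suggest '{entry.correction}' ({entry.feedback})")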

Moreover, the platform allows revision memories to be exchanged (exported and imported in an XML format) or merged, so that they can be reused with new (similar or other) texts and with other "assessees". The platform operates fully online, which makes collaboration between colleagues easy. Using neurolinguistic techniques, the correction memories can be made language-independent.

Presenters: Hendrik Kockaert, KU Leuven and Bert Wylin, Televic Education

 

Online Unobtrusive Game-Based Assessment of Competency

Serious games show promise as an implementation and delivery format for assessment of hard-to-measure latent traits, as games allow for real-time use of player actions and work products for online, unobtrusive assessment. Two possible approaches are the use of top-down confirmatory psychometric models that use quantified player actions and gameplay logs to calculate assessment criteria, and bottom-up exploratory techniques that look for meaningful behavioural patterns in order to make prediction and classification decisions. We believe these two approaches can be meaningfully combined: bottom-up data mining can identify new assessment evidence to incorporate into top-down scoring models, which in turn can highlight data sensitivity and provide guidance for analysis. We present an early example of combined approaches that centres on the use of Bayesian inference networks to assess leadership competency, using as a case study the Mayor Game, a serious game that trains mayors in the Netherlands to deal with crisis situations. We introduce our application of the Evidence-Centered Design approach, show examples of top-down models based on literature analysis and expert input and bottom-up models based on Bayesian Search, and present results from the user studies we carried out for validation purposes.
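
As background to the top-down side of this approach, the sketch below shows how a single latent competency might be updated from observed in-game evidence under a simple Bayesian network. The structure, evidence variables and probabilities are invented for illustration and are not the Mayor Game's actual model.

    # Toy Bayesian update of a latent "leadership" competency from gameplay
    # evidence; all structure and numbers are illustrative assumptions.
    prior = {"high": 0.5, "low": 0.5}

    # P(observation | competency) for two example in-game behaviours.
    likelihood = {
        "delegated_task":   {"high": 0.8, "low": 0.3},
        "ignored_briefing": {"high": 0.1, "low": 0.5},
    }

    def update(prior, observations):
        """Posterior over the competency, assuming conditionally independent evidence."""
        posterior = dict(prior)
        for obs, seen in observations.items():
            for state in posterior:
                p = likelihood[obs][state]
                posterior[state] *= p if seen else (1.0 - p)
        total = sum(posterior.values())
        return {state: value / total for state, value in posterior.items()}

    # Gameplay log: the player delegated a task and did not ignore the briefing.
    print(update(prior, {"delegated_task": True, "ignored_briefing": False}))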

Presenters: Dylan Schouten, University of Twente and Paul Porskamp, T-Xchange

 

Setting a Performance Standard for a High-stakes, LOFT-delivered and Globally Administered Medical Knowledge Examination

The Medical Council of Canada Evaluating Examination (MCCEE) is a computer-based multiple-choice exam that assesses basic medical knowledge for international medical graduates who wish to pursue postgraduate residency training in Canada. It is delivered through a vendor (Prometric) using a linear-on-the-fly testing (LOFT) model to over 80 countries worldwide, including 29 countries in Europe. LOFT provides enhanced test security through real-time assembly of a unique fixed-length test form for each examinee by selecting items from a large pool of pre-calibrated items with item exposure control. Consequently, it affords many benefits such as frequent test offerings, longer testing windows, and flexibility in scheduling for a global testing program like the MCCEE. Currently the MCCEE is offered five times a year with each session consisting of a two- to three-week testing window.
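
For readers unfamiliar with LOFT, the assembly step can be pictured as drawing a fixed-length form from a calibrated pool while keeping item exposure down. The sketch below is a deliberately simplified illustration; its fields, numbers and selection rule are assumptions, not Prometric's or the MCCEE's actual algorithm, which also has to balance content coverage and statistical targets.

    import random

    # Toy linear-on-the-fly (LOFT) assembly: each examinee gets a unique
    # fixed-length form drawn from a pre-calibrated pool, with a simple
    # exposure-control rule (illustrative only).
    def assemble_form(pool, form_length, max_exposure):
        eligible = [item for item in pool if item["exposure"] < max_exposure]
        if len(eligible) < form_length:
            raise ValueError("Item pool exhausted under the exposure limit.")
        form = random.sample(eligible, form_length)
        for item in form:
            item["exposure"] += 1   # track how often each item has been administered
        return form

    # A miniature pool of pre-calibrated items (b = IRT difficulty).
    pool = [{"id": i, "b": random.gauss(0, 1), "exposure": 0} for i in range(500)]
    form = assemble_form(pool, form_length=100, max_exposure=50)
    print(len(form), "items; mean difficulty =",
          round(sum(item["b"] for item in form) / len(form), 2))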

Among the many challenges involved in LOFT, one is to set a defensible pass score and apply it to all the unique test forms that examinees take. The purpose of this session is to share a recent experience in a standard-setting exercise aimed at establishing a pass score for the MCCEE, a high-stakes medical knowledge exam in a LOFT context. The presenter will discuss:

  • Issues encountered and measures taken to address them;
  • Validity evidence for the standard-setting process and the resulting pass score; and
  • Psychometric considerations when setting and implementing a pass score in a LOFT context.
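
As background to the last point, one common way to carry a single pass score across unique IRT-calibrated forms is to fix the cut on the latent (theta) scale and convert it to each form's expected number-correct score through the test characteristic curve. The 2PL sketch below uses made-up item parameters and an assumed cut value, not MCCEE data, and is not necessarily the method the presenters adopted.

    import math, random

    # Sketch: one latent-scale cut score applied to two different LOFT forms
    # by converting it to each form's expected number-correct (2PL model).
    def p_correct(theta, a, b):
        """2PL probability of answering an item correctly."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def expected_score(theta, items):
        """Test characteristic curve: expected number-correct at ability theta."""
        return sum(p_correct(theta, item["a"], item["b"]) for item in items)

    theta_cut = 0.2   # pass score fixed on the latent scale (assumed value)

    random.seed(1)
    form_a = [{"a": random.uniform(0.6, 1.6), "b": random.gauss(0, 1)} for _ in range(100)]
    form_b = [{"a": random.uniform(0.6, 1.6), "b": random.gauss(0, 1)} for _ in range(100)]

    for name, form in [("Form A", form_a), ("Form B", form_b)]:
        print(f"{name}: pass mark is about {expected_score(theta_cut, form):.1f} of {len(form)} items")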

Presenters: Fang Tian, Liane Patsula, and André De Champlain, Medical Council of Canada

 

The Use of Situational Judgement Tests to Select Human Capital

Situational judgement tests (SJTs) have a fairly long history within personnel selection. Use of SJTs in large-scale, high-stakes testing emerged after SJTs were reintroduced to the wider psychological research community in 1990. SJTs have particularly received attention as a tool for selection, such as for higher education admissions, because they provide a means of broadening the range of constructs that can be assessed through standardised processes. SJTs have been found to provide incremental validity over other admission selection measures for dimensions of the educational experience related to interpersonal skills and other non-academic constructs, and to have smaller subgroup differences than other measures. Therefore, they are also attractive for institutions wanting to widen access to education.

This session will provide an introduction to issues related to SJT use in high-stakes testing. Challenges in SJT development for high-stakes testing to be discussed include:

  • Creating alternate test forms;
  • Obtaining reliability estimates;
  • Selecting a measurement model.

Issues for exploration in the session include:

  • What criterion space of academic success is appropriate for an SJT?
  • What influences the SJT item format and scoring procedures adopted?
  • How should SJT scores be reported?
  • How should SJT results be factored into the overall admission decision?

Presenter: Belinda Brunner, Pearson VUE

 

Which Threats to Our Industry, Currently in Plain View, Are We Choosing to Ignore?

Clayton Christensen at Harvard has defined a disruptive innovation as something that creates a new market and eventually disrupts an existing market. He observes that disruptive innovations tend to be produced by outsiders and entrepreneurs rather than by existing market-leading companies. Market leaders often ignore disruptive innovations because they are not profitable enough and because their development can take scarce resources away from the sustaining innovations needed to remain competitive within their sector. Established industry players also find it difficult to spot potential disruptions because they tend to focus narrowly on the mainstream. Even when they do detect a radical change, established players often believe that the quality of their products and their technical standards will prevail in the eventual battle to retain current consumers. Then, too late, they realise that they have been lulled into a false sense of security. According to Christensen, advanced technologies are not necessarily the only source of disruption; it can be new business models that matter most, along with novel combinations of existing technologies applied cleverly to new markets and networks.

In this snapshot session, the audience will be shown examples of disruptive innovation that could catch the assessment industry unawares and asked to identify those that pose the greatest threat to current standards, practices and products.

Presenter: Robert McHenry, Independent

 

For a complete list of sessions, visit our full programme.

 

27 - 29 September 2017
Grand Hotel Huis ter Duin
Noordwijk, The Netherlands