Snapshot Sessions are thirty-minute interactive breakout sessions. This new session type allows presenters to deliver information to the audience in a more concise format than a full breakout session.
The global talent assessment market is experiencing some of its most fundamental changes since the advent of computer-based delivery over 20 years ago. Emerging technologies are changing the focus for many organisations as they seek to roll out global talent solutions which appeal to a wide range of applicants, manage different technological expectations and deliver a candidate experience that resonates with both their employer and consumer brands.
This Snapshot session will outline the evolution of technology-driven assessments and the increasing impact that they can deliver to the end-to-end candidate experience. The session will highlight a range of client case studies with Virgin Media, Formaposte, Emirates and Jaguar Land Rover, and examine how they use mobile delivery, animated previews, gamification, virtual interaction and meaningful candidate feedback to deliver immersive, interactive and engaging assessment experiences that drive a wide range of human capital metrics.
These metrics go beyond traditional measures of quality of hire and enhanced job performance and focus on broader organisational outcomes.
Presenter: Chris Small, PSI
Big Wrong Data is built around the idea of creating an error-and-feedback memory to revise answers (texts, translations, gap-fill questions...) in a fast, efficient and objective way. The reviser can add new errors and feedback during the revision process and apply them to all previous and future answers at once.
This database is in effect a correction or revision “memory”: it allows the system to recognise errors in new answers and to suggest corrections and feedback automatically. Of course, a human evaluator can still accept or reject those suggestions, but the bulk workload of typing and retyping the same corrections and feedback over and over again is now handled immediately and objectively by the Big Wrong Data engine.
Moreover, the platform allows revision memories to be exchanged (exported and imported in an XML format) or merged, so that they can be reused with new (similar or other) texts and with other "assessees". The platform operates fully online, which makes collaboration between colleagues easy. Using neurolinguistic techniques, the correction memories can be made language-independent.
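To make the mechanism concrete, the sketch below shows a minimal correction memory in Python. It is purely illustrative: the class and method names are hypothetical, a naive substring check stands in for the neurolinguistic matching described above, and this is not Televic's actual engine.

```python
# Hypothetical sketch of a correction "memory". A naive substring matcher
# stands in for the real error-recognition engine; all names are invented.
from dataclasses import dataclass, field


@dataclass
class CorrectionMemory:
    """Maps a known error pattern to a stored (correction, feedback) pair."""
    entries: dict[str, tuple[str, str]] = field(default_factory=dict)

    def add(self, error: str, correction: str, feedback: str) -> None:
        # A new entry applies retroactively: simply re-run suggest() on
        # previously collected answers as well as on future ones.
        self.entries[error] = (correction, feedback)

    def suggest(self, answer: str) -> list[tuple[str, str, str]]:
        # Return (error, suggested correction, feedback) for every known
        # error found in the answer; a human evaluator accepts or rejects.
        return [(err, corr, fb)
                for err, (corr, fb) in self.entries.items()
                if err in answer]


memory = CorrectionMemory()
memory.add("thier", "their", "Spelling: 'their' is the possessive form.")
for suggestion in memory.suggest("He lost thier keys."):
    print(suggestion)
```

Exporting or merging memories, as described above, would then amount to serialising and combining such entry collections; the platform uses an XML format for this.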
Presenters: Hendrik Kockaert, KU Leuven and Bert Wylin, Televic Education
Serious games show promise as an implementation and delivery format for assessing hard-to-measure latent traits, as games allow real-time use of player actions and work products for online, unobtrusive assessment. Two possible approaches are top-down confirmatory psychometric models, which use quantified player actions and gameplay logs to calculate assessment criteria, and bottom-up exploratory techniques, which look for meaningful behavioural patterns in order to make prediction and classification decisions. We believe these two approaches can be meaningfully combined: bottom-up data mining can identify new assessment evidence to incorporate into top-down scoring models, which in turn can highlight data sensitivity and provide guidance for analysis. We present an early example of combined approaches that centres on the use of Bayesian inference networks to assess leadership competency, using as a case study the Mayor Game, a serious game that trains mayors in the Netherlands to deal with crisis situations. We introduce our application of the Evidence-Centered Design approach, show examples of top-down models based on literature analysis and expert input and bottom-up models based on Bayesian Search, and present results from the user studies we carried out for validation purposes.
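As a purely illustrative sketch of the top-down idea (not the actual Mayor Game models, and with invented probabilities), the Python below updates a belief about a single latent leadership variable from observed in-game actions, treating each piece of evidence as conditionally independent:

```python
# Toy Bayesian update for a single latent "leadership competency" variable.
# All probabilities are invented for illustration; the real Mayor Game
# models, built with Evidence-Centered Design, are far richer than this.

def update(prior: float, p_obs_given_high: float, p_obs_given_low: float) -> float:
    """Posterior P(high competency) after observing one piece of evidence."""
    numerator = p_obs_given_high * prior
    denominator = numerator + p_obs_given_low * (1.0 - prior)
    return numerator / denominator


belief = 0.5  # uninformative prior over the latent trait
# Each tuple: (P(action | high competency), P(action | low competency)).
evidence = [
    (0.8, 0.3),  # hypothetical: player delegated tasks during the crisis
    (0.7, 0.4),  # hypothetical: player communicated clearly with the press
]
for p_high, p_low in evidence:
    belief = update(belief, p_high, p_low)

print(f"Posterior P(high leadership competency) = {belief:.2f}")
```

A full inference network would link multiple latent competencies to dependent observables; Evidence-Centered Design supplies the mapping from gameplay events to such evidence variables.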
Presenters: Dylan Schouten, University of Twente and Paul Porskamp, T-Xchange
The Medical Council of Canada Evaluating Examination (MCCEE) is a computer-based multiple-choice exam that assesses basic medical knowledge for international medical graduates who wish to pursue postgraduate residency training in Canada. It is delivered through a vendor (Prometric), using a linear-on-the-fly testing (LOFT) model, in over 80 countries worldwide, including 29 countries in Europe. LOFT provides enhanced test security by assembling, in real time, a unique fixed-length test form for each examinee, selecting items from a large pool of pre-calibrated items with item exposure control. It consequently affords many benefits for a global testing programme like the MCCEE, such as frequent test offerings, longer testing windows and flexibility in scheduling. Currently, the MCCEE is offered five times a year, with each session consisting of a two- to three-week testing window.
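To illustrate the core of LOFT assembly, here is a deliberately simplified Python sketch. The pool size, form length and exposure cap are invented numbers, and real operational assembly, including the MCCEE's, would also enforce content-blueprint and statistical constraints:

```python
# Deliberately simplified LOFT assembly: every examinee gets a unique,
# fixed-length form drawn from a pre-calibrated pool, with a crude cap on
# item exposure. Pool size, form length and the cap are invented numbers.
import random
from collections import Counter

POOL = [f"item_{i:04d}" for i in range(2000)]  # pre-calibrated item pool
FORM_LENGTH = 180
MAX_EXPOSURE = 100  # no item may be administered more often than this

exposure: Counter[str] = Counter()


def assemble_form(rng: random.Random) -> list[str]:
    eligible = [item for item in POOL if exposure[item] < MAX_EXPOSURE]
    form = rng.sample(eligible, FORM_LENGTH)  # unique form for this examinee
    exposure.update(form)
    return form


rng = random.Random(42)
form_a, form_b = assemble_form(rng), assemble_form(rng)
print(len(set(form_a) & set(form_b)), "items shared between two examinees")
```

With a sufficiently large calibrated pool, any two examinees share only a small fraction of items, which is where the security benefit of LOFT comes from.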
Among the many challenges involved in LOFT, one is to set a defensible pass score and apply it to all of the unique test forms that examinees take. The purpose of this session is to share a recent experience of a standard-setting exercise aimed at establishing a pass score for the MCCEE, a high-stakes medical knowledge exam, in a LOFT context. The presenter will discuss the standard-setting process and how the resulting pass score is applied across unique forms.
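One common way to carry a single cut score across unique pre-calibrated forms is to fix the cut on the latent ability scale and map it to a raw-score pass mark on each form through that form's test characteristic curve. The sketch below illustrates this generic IRT technique with invented 2PL item parameters; it is not necessarily the exact MCCEE procedure:

```python
# Generic IRT sketch: fix one cut score on the ability (theta) scale and map
# it onto each unique form via the form's test characteristic curve (TCC).
# The 2PL item parameters and the theta cut below are invented.
import math


def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))


def raw_cut(form: list[tuple[float, float]], theta_cut: float) -> float:
    """Expected raw score at the cut ability, i.e. the TCC at theta_cut."""
    return sum(p_correct(theta_cut, a, b) for a, b in form)


# A tiny hypothetical form: (discrimination a, difficulty b) per item.
form = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 0.2)]
print(f"Raw pass mark on this form: {raw_cut(form, theta_cut=0.1):.2f}")
```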
Presenters: Fang Tian, Liane Patsula and André De Champlain, Medical Council of Canada
Situational judgement tests (SJTs) have a fairly long history within personnel selection. Use of SJTs in large-scale, high-stakes testing emerged after SJTs were reintroduced to the wider psychological research community in 1990. SJTs have received particular attention as a tool for selection, such as for higher education admissions, because they provide a means of broadening the range of constructs that can be assessed through standardised processes. SJTs have been found to provide incremental validity over other admission selection measures for dimensions of the educational experience related to interpersonal skills and other non-academic constructs, and to have smaller subgroup differences than other measures. Therefore, they are also attractive for institutions wanting to widen access to education.
This session will provide an introduction to issues related to SJT use in high-stakes testing. Challenges in SJT development for high-stakes testing will be discussed, along with broader issues for exploration during the session.
Presenter: Belinda Brunner, Pearson VUE
Clayton Christensen at Harvard has defined a disruptive innovation as something that creates a new market and eventually disrupts an existing one. He observes that disruptive innovations tend to be produced by outsiders and entrepreneurs rather than by existing market-leading companies. Often, market leaders ignore disruptive innovations because they are not profitable enough and because their development can take scarce resources away from the sustaining innovations needed to stay competitive within their sector. For several reasons, established industry players find it difficult to spot potential disruptions, because they tend to focus narrowly on the mainstream. Even when they do detect a radical change, established players often believe that the quality of their products and their technical standards will prevail in the eventual battle to retain current consumers. Then, too late, they realise that they have been lulled into a false sense of security. According to Christensen, advanced technologies are not necessarily the only source of disruption. Rather, new business models can matter most, along with novel combinations of existing technologies applied cleverly to new markets and networks.
In this snapshot session, the audience will be shown examples of disruptive innovation that could catch the assessment industry unawares and asked to identify those that pose the greatest threat to current standards, practices and products.
Presenter: Robert McHenry, Independent
For a complete list of sessions, visit our full programme.