How to Design Company-Specific Assessments
Generic assessments lack the nuance your organization needs. Learn how to design assessments that truly predict who will succeed in your specific context.
By Ingmar van Maurik · Founder & CEO, Making Moves
Why generic assessments fall short
Most organizations use standard assessments from major test publishers: personality questionnaires, cognitive ability tests, and situational judgment tests that are identical across hundreds of companies. The problem: what predicts success at a consulting firm is fundamentally different from what predicts it at a tech startup or a healthcare institution.
Research confirms this. Generic assessments have an average predictive validity of 0.20 to 0.35 for job performance. Company-specific assessments achieve a validity of 0.45 to 0.65 — nearly double. That difference translates directly into better hires, less turnover, and higher productivity.
In this article, we explain step by step how to design assessments specific to your organization. With concrete methods, examples, and an implementation plan.
Step 1: Define success in your organization
Before you can design an assessment, you need to know what you are measuring. And that starts with a crucial question: what makes someone successful in your organization?
The success profile method
Analyze your top performers and discover what sets them apart:
Quantitative analysis: compare top performers with the rest of the team on hard data such as performance ratings, promotion speed, retention, and output metrics.
Qualitative analysis: interview top performers and their managers about how they approach their work, make decisions, and handle setbacks.
From analysis to competencies
Based on your analysis, define 5-8 core competencies that predict success. These are not the generic competencies found in every HR handbook (teamwork, communication, problem-solving), but specific, measurable characteristics unique to your context.
Example for a tech scale-up:
| Competency | What it looks like here | How it is measured |
|-----------|-----------|--------------|
| Autonomous problem-solving | Solving technical problems without a clear approach or anyone to help | Work sample + behavioral interview |
| Handling pressure | Making sound trade-offs close to a deadline | Situational judgment test |
| Cross-functional communication | Aligning product owner, tech lead, and client | SJT + structured interview |
Step 2: Choose the right assessment methods
There are different methods to measure competencies. The choice depends on what you measure and the candidate experience you want to deliver.
Customized cognitive assessments
Instead of a standard IQ test, design cognitive tasks that mirror the role's actual thinking demands: a debugging exercise for developers, a case analysis for consultants, a prioritization puzzle for product managers. The format should feel like the job, not like a test hall.
Situational Judgment Tests (SJTs)
SJTs present realistic work scenarios and ask candidates how they would respond. The key is that scenarios are specific to your organization:
Generic scenario (not effective):
"A colleague misses a deadline. What do you do?"
Company-specific scenario (effective):
"You are working on a feature release. Two days before the deadline, you discover a significant performance problem. The product owner wants to launch as planned, the tech lead suggests postponing the release. The client expects the feature next week. How do you handle this?"
This scenario tests multiple competencies simultaneously: problem-solving, communication, handling pressure, and prioritization in a context your candidates immediately recognize.
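To make this concrete, here is a minimal sketch of how such a scenario could be encoded for automated scoring. The field names, response options, and point values are illustrative assumptions, not a prescribed schema:

```python
# Hypothetical encoding of a company-specific SJT item.
# Options and point values are illustrative, not a scoring standard.
sjt_item = {
    "id": "release-pressure-01",
    "scenario": (
        "Two days before the deadline you discover a significant "
        "performance problem. The product owner wants to launch as "
        "planned; the tech lead suggests postponing. How do you handle this?"
    ),
    "competencies": ["problem-solving", "communication", "handling pressure"],
    "options": {
        "A": {"text": "Launch as planned, patch afterwards", "points": 1},
        "B": {"text": "Escalate and wait for a management decision", "points": 2},
        "C": {"text": "Quantify the impact and broker a joint decision", "points": 4},
        "D": {"text": "Postpone the release unilaterally", "points": 0},
    },
}

def score_sjt(item: dict, answer: str) -> int:
    """Return the points awarded for a candidate's chosen option."""
    return item["options"][answer]["points"]
```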
Work samples
The most valid assessment method is a work sample: a task that simulates the actual job. Design work samples that are:
- representative: drawn from real work, not an artificial puzzle
- proportionate: short enough to respect candidates' time
- scorable: evaluated against a predefined rubric rather than gut feeling
Structured behavioral interviews
Design interview questions that specifically measure your core competencies. Use the STAR method (Situation, Task, Action, Result) but with questions relevant to your context:
For the competency 'autonomous problem-solving':
"Describe a situation where you had to solve a technical problem without a clear approach or anyone available to help you. How did you tackle it and what was the result?"
Step 3: Build a scoring model
The scoring model determines how you translate assessment results into a prediction of success. This is where the science of psychometrics meets the practice of your organization.
Weighted scoring
Not every competency is equally important. Based on your success profile analysis, assign weights:
| Competency | Weight | Rationale |
|-----------|--------|-----------|
| Autonomous problem-solving | 30% | Strongest differentiator among top performers |
| Handling pressure | 25% | Frequent deadline trade-offs in this context |
| Cross-functional communication | 25% | Constant alignment between product, tech, and clients |
| Prioritization | 20% | Supports the other three under time constraints |
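In its simplest form, weighted scoring is a weighted average over normalized competency scores. A minimal sketch, with assumed competency names and weights matching the example table above:

```python
# Example weights from a success profile analysis (illustrative values).
WEIGHTS = {
    "autonomous_problem_solving": 0.30,
    "handling_pressure": 0.25,
    "communication": 0.25,
    "prioritization": 0.20,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-competency scores (0-100) into one weighted total."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
```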
Thresholds versus continuous scoring
You can work with thresholds (candidate must score at least X on each competency) or continuous scoring (total score determines ranking). The best approach combines both: set minimum thresholds on the must-have competencies to screen out clear mismatches, then rank the remaining candidates by their weighted total score, as in the sketch below.
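A minimal sketch of that combination; the threshold values are assumptions, and `score_fn` stands for any total-scoring function such as the `weighted_score` sketch above:

```python
from typing import Callable

# Minimum acceptable score per must-have competency (illustrative values).
THRESHOLDS = {"autonomous_problem_solving": 60, "handling_pressure": 50}

def rank_candidates(
    candidates: dict[str, dict[str, float]],
    score_fn: Callable[[dict[str, float]], float],
) -> list[tuple[str, float]]:
    """Drop candidates below any must-have threshold, then rank the
    rest by their total score (highest first)."""
    eligible = {
        name: scores
        for name, scores in candidates.items()
        if all(scores[c] >= floor for c, floor in THRESHOLDS.items())
    }
    ranked = [(name, score_fn(scores)) for name, scores in eligible.items()]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```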
Calibration with historical data
If you already have data from previous hires, you can calibrate the scoring model. Analyze which assessment scores correlate with successful hires and adjust your weights and thresholds accordingly.
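One simple calibration sketch: correlate each competency's assessment scores with a later performance measure and reweight in proportion to the (positive) correlations. This assumes you have matched score/performance records per hire; a real calibration would also account for sample size and range restriction:

```python
import numpy as np

def calibrate_weights(scores: dict[str, list[float]],
                      performance: list[float]) -> dict[str, float]:
    """Reweight competencies in proportion to how strongly each
    correlates with later job performance (negative correlations
    are floored at zero)."""
    perf = np.array(performance)
    corr = {
        c: max(float(np.corrcoef(np.array(vals), perf)[0, 1]), 0.0)
        for c, vals in scores.items()
    }
    total = sum(corr.values()) or 1.0  # guard against all-zero correlations
    return {c: r / total for c, r in corr.items()}
```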
Step 4: Validate your assessment
An assessment that is not valid and reliable is worthless — or worse, it leads to systematically wrong decisions.
Content validity
Have domain experts evaluate whether the assessment measures what it should measure. Are the scenarios realistic? Do the questions actually measure the intended competencies?
Criterion validity
This is the most important form of validity: does the assessment actually predict future success? Measure this by:
1. Correlating assessment scores with performance data after 6 and 12 months
2. Comparing whether high-scoring candidates perform better than low-scoring ones
3. Calculating the predictive validity coefficient (aim for > 0.40)
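The coefficient in point 3 is typically the Pearson correlation between assessment scores and the later performance criterion. A minimal sketch, assuming paired per-hire records:

```python
import numpy as np

def predictive_validity(assessment_scores: list[float],
                        performance_ratings: list[float]) -> float:
    """Pearson correlation between assessment scores and performance
    measured 6-12 months after hire. Aim for r > 0.40."""
    return float(np.corrcoef(assessment_scores, performance_ratings)[0, 1])
```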
Reliability
A reliable assessment gives consistent results:
- Test-retest reliability: candidates who retake the assessment get similar scores
- Internal consistency: items measuring the same competency correlate with each other (Cronbach's alpha > 0.70)
- Inter-rater reliability: different assessors score the same candidate similarly
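Internal consistency is conventionally checked with Cronbach's alpha. A minimal computation, assuming a candidates-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (candidates x items) score matrix.
    Values above ~0.70 are conventionally considered acceptable."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = item_scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)
```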
Adverse impact analysis
Check that your assessment does not disadvantage protected groups. Apply the four-fifths (4/5) rule: the selection ratio of each group must be at least 80% of that of the group with the highest selection ratio.
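A sketch of that check: compute each group's selection ratio (hired divided by assessed) and compare it against the highest ratio. The group labels below are placeholders:

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """outcomes maps group -> (hired, assessed). A group passes if its
    selection ratio is at least 80% of the highest group's ratio."""
    ratios = {g: hired / assessed for g, (hired, assessed) in outcomes.items()}
    best = max(ratios.values())
    return {g: r >= 0.8 * best for g, r in ratios.items()}

# Example: {"group_a": (30, 100), "group_b": (18, 90)} gives ratios
# 0.30 and 0.20; 0.20 / 0.30 = 0.67 < 0.8, so group_b flags adverse impact.
```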
Step 5: Implement and iterate
Technical implementation
A company-specific assessment system requires technology flexible enough to host custom assessments. With your own hiring system, you can:
- build and version your own scenarios, work samples, and scoring rubrics
- apply your weighted scoring model consistently and automatically
- store assessment scores alongside later performance data for validation
The iteration cycle
Assessment design is not a one-time project. It is a continuous process:
Quarter 1-2: Launch the first version, collect data
Quarter 3-4: First validation analysis, calibrate the scoring model
Year 2: Refine scenarios and questions based on data, add new competencies or remove irrelevant ones
Year 3+: The assessment becomes increasingly accurate through the growing dataset
Avoiding common mistakes
- Measuring too many competencies: stick to the 5-8 that actually predict success
- Writing generic scenarios: if a scenario could come from any company, it will not differentiate
- Skipping validation: without criterion validity data, you are guessing
- Ignoring adverse impact: run the four-fifths check before complaints arrive, not after
The business case
The investment in company-specific assessments pays back quickly:
| Metric | Generic assessments | Company-specific assessments | Difference |
|--------|-------------------|-----------------|------------|
| Predictive validity | 0.20-0.35 | 0.45-0.65 | roughly double |
For an organization doing 200 hires per year, the difference in turnover alone translates to savings of EUR 300,000 to EUR 600,000 per year (based on replacement costs of EUR 25,000-50,000 per bad hire).
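The arithmetic behind that estimate, assuming the better assessment avoids roughly 12 bad hires per year (6% of 200) at the stated replacement costs:

```python
HIRES_PER_YEAR = 200
AVOIDED_BAD_HIRES = 12                # assumption: ~6% of annual hires
REPLACEMENT_COST = (25_000, 50_000)   # EUR per bad hire, per the article

low = AVOIDED_BAD_HIRES * REPLACEMENT_COST[0]   # EUR 300,000
high = AVOIDED_BAD_HIRES * REPLACEMENT_COST[1]  # EUR 600,000
print(f"Annual savings: EUR {low:,} - EUR {high:,}")
```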