
A deeper research narrative, not just a portfolio tile.

Flagship case study

Trust in Automation

A systematic research project investigating how the Trust in Automated Systems Scale is modified across studies, and how those changes affect validity, comparison, and interpretation.

Type: research · Focus: scale validity · Role: graduate research assistant
Overview

Why this project matters

Trust in automation is a core topic in human factors, but the strength of that research depends on how trust is measured. If a validated scale is modified across studies without careful attention, comparisons become weaker and interpretation becomes less certain.

Question

How consistently is the scale used across studies?

Concern

Modified items can weaken validity and comparability.

Value

More rigorous measurement leads to more credible research.

Problem

Measurement drift is easy to overlook.

In automation research, it is common for scales to be adapted to fit context, language, or study structure. But those adaptations are not always neutral. Small shifts in wording, response format, or framing can change what is actually being measured.

  • Modified scales make cross-study comparison harder.
  • Researchers may assume validity transfers automatically when it does not.
  • Interpretation becomes less precise when instruments drift from their validated form.
Method

Systematic review with close methodological attention.

My role focused on reviewing the literature, identifying how the scale was used, documenting where modifications occurred, and helping interpret what those changes meant for methodological strength.

  • Reviewed studies using the trust scale in automation contexts.
  • Tracked wording changes, translation differences, and response-scale adaptations.
  • Compared use patterns to the original validated instrument.
  • Worked toward a structured synthesis of methodological implications.
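As an illustration only, not part of the project itself, the tracking step above could be sketched programmatically: compare each study's item wording against the validated original and flag items that diverge beyond a similarity threshold. The item texts, `ORIGINAL_ITEMS`, and `flag_modifications` below are invented for the example.

```python
# Hypothetical sketch: flag wording drift between a study's scale items
# and the original validated instrument. Item texts are invented examples,
# not the actual scale wording.
from difflib import SequenceMatcher

ORIGINAL_ITEMS = {
    1: "The system is deceptive.",
    2: "I am confident in the system.",
}

def flag_modifications(study_items, threshold=0.9):
    """Return (item number, study wording, similarity) for items whose
    wording diverges from the validated original."""
    flagged = []
    for num, wording in study_items.items():
        ratio = SequenceMatcher(
            None, ORIGINAL_ITEMS[num].lower(), wording.lower()
        ).ratio()
        if ratio < threshold:
            flagged.append((num, wording, round(ratio, 2)))
    return flagged

# One item reworded, one left intact: only the reworded item is flagged.
study = {1: "The automation is deceptive.", 2: "I am confident in the system."}
print(flag_modifications(study))
```

A similarity score is only a screening aid; each flagged item would still need a qualitative judgment about whether the change alters what is being measured.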
Findings

The deeper insight is not only about trust.

The project sharpened an important lesson: validated instruments carry design logic. When that logic is altered casually, the meaning of the resulting data can change as well.

Wording

Small wording changes can shift nuance and affect interpretation.

Format

Scale format changes can alter response behavior and comparability.

Context

Context-specific adaptation can blur what the instrument originally measured.

Implications

What this says about your value as a researcher.

This project is a strong portfolio anchor because it shows a research mindset that is precise, critical, and sensitive to how design choices shape knowledge. It positions you as someone who does not just use measures, but understands what makes them trustworthy.

measurement rigor · critical literature review · automation trust · research synthesis
Reflection

Why this belongs at the center of the site.

It is the clearest expression of your identity: thoughtful, methodical, and interested in how people interact with increasingly automated environments. It also translates well to both academic and industry audiences because the core issue — whether we are measuring the right thing, in the right way — matters everywhere.