Abstract: With the rapid integration of artificial intelligence into human resource management, algorithmic layoffs are increasingly being implemented in organizational downsizing. In such practices, an intelligent system evaluates employees and produces termination decisions. Prior research on algorithmic decision-making in HR has focused mainly on recruitment, screening, and performance appraisal, while the psychological consequences of algorithmic layoffs remain underexplored. Layoffs are highly negative, identity-threatening, and typically irreversible, so they may elicit stronger fairness concerns and more intense attribution processes than other HR outcomes. Human-led termination decisions are often perceived as vulnerable to subjective bias, favoritism, and emotional influence. In contrast, algorithmic decisions are frequently framed as rule-based and impartial, which may increase perceived procedural justice even when the outcome is unfavorable. Procedural justice theory indicates that perceptions of fairness in decision procedures shape how people accept and respond to negative outcomes. Attribution theory further suggests that when a procedure appears fair and objective, individuals may infer that the outcome reflects their own qualities, strengthening internal attributions and self-blame. This study compares perceived fairness between algorithmic and human layoffs, tests whether perceived objectivity explains how decision-maker type affects fairness perception, and examines whether fairness perception in turn explains how decision-maker type affects internal attribution tendencies. By focusing on a high-stakes negative context, the study clarifies a potentially paradoxical effect of algorithmic management: algorithmic layoffs may reduce perceived unfairness while simultaneously intensifying self-responsibility explanations that can shape recovery and well-being.
Two scenario-based experiments were conducted using a single-factor between-subjects design. In both experiments, the independent variable was the layoff decision-maker, operationalized as an algorithmic system versus a human manager, and participants were randomly assigned to one of the two conditions. Participants were asked to imagine themselves as an employee terminated during a company restructuring. Experiment 1 (N = 174 undergraduates) tested whether decision-maker type influenced perceived procedural fairness and whether perceived objectivity mediated this effect, that is, whether decision-maker type affected objectivity judgments, which in turn affected fairness perception. Participants read a brief layoff scenario in which the termination decision was described as made either by an HR AI system or by an HR manager, then completed measures of perceived fairness and perceived objectivity. Experiment 2 (N = 172 undergraduates) used the same scenario manipulation with a new sample and added a measure of internal attribution tendency, assessing the extent to which participants explained the layoff by their own abilities, effort, or personal characteristics. Analyses were conducted in SPSS 27.0, using independent-samples t-tests for main effects and PROCESS Model 4 with 5,000 bootstrap samples for mediation, controlling for gender and age.
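For readers unfamiliar with the bootstrap mediation procedure that PROCESS Model 4 implements, the following is a minimal conceptual sketch in Python (not the authors' SPSS analysis). It estimates the indirect effect a×b of a binary condition X on an outcome Y through a mediator M by percentile bootstrap; the variable names, simulated data, and omission of covariates are illustrative assumptions, not the study's materials.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Percentile-bootstrap estimate of the indirect effect a*b in a simple
    mediation model (X -> M -> Y), conceptually mirroring PROCESS Model 4.
    x: 0/1 condition code; m: mediator scores; y: outcome scores.
    Covariates (e.g., gender, age) are omitted here for brevity."""
    rng = np.random.default_rng(seed)
    n = len(x)
    estimates = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)            # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]           # a-path: slope of M on X
        design = np.column_stack([np.ones(n), xb, mb])
        coefs, *_ = np.linalg.lstsq(design, yb, rcond=None)
        b = coefs[2]                           # b-path: M's coefficient, X held constant
        estimates[i] = a * b                   # indirect effect for this resample
    lo, hi = np.percentile(estimates, [2.5, 97.5])
    return estimates.mean(), (lo, hi)

# Illustrative use with simulated data: condition raises the mediator,
# which raises the outcome, so the 95% CI should exclude zero.
rng = np.random.default_rng(1)
n = 200
x = np.repeat([0.0, 1.0], n // 2)
m = 0.8 * x + rng.normal(0, 1, n)
y = 0.6 * m + rng.normal(0, 1, n)
effect, (lo, hi) = bootstrap_indirect_effect(x, m, y, n_boot=2000)
```

A mediation effect is inferred when the bootstrap confidence interval for a×b excludes zero, which is the criterion reported for both experiments.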
Experiment 1 showed higher perceived fairness for algorithmic layoffs than for human layoffs (algorithmic: M = 3.71; human: M = 3.32; t = 2.19, p = 0.03), supporting Hypothesis 1. Mediation analyses indicated that perceived objectivity fully mediated the effect of decision-maker type on fairness perception (indirect effect = 0.38, 95% CI = [0.19, 0.58]), accounting for about 71% of the total effect. Experiment 2 found stronger internal attribution in the algorithmic condition than in the human condition (algorithmic: M = 5.20; human: M = 4.48; t = 3.05, p = 0.03), supporting Hypothesis 3. Perceived fairness fully mediated the effect of decision-maker type on internal attribution (indirect effect = 0.30, 95% CI = [0.09, 0.56]), accounting for about 42% of the total effect. Overall, the results supported a coherent mechanism in which algorithmic (vs. human) layoffs increased perceived objectivity and fairness, and these perceptions were associated with stronger internal attribution.
Across two experiments, algorithmic layoffs produced higher perceived fairness and stronger internal attribution tendencies compared with human layoffs. The findings extend procedural justice research into algorithmic management by showing that an impersonal, rule-based decision procedure can reduce perceived bias and enhance perceived legitimacy even when employees face an unfavorable, irreversible outcome. At the same time, the results reveal a psychological cost of this legitimacy effect: when the process is evaluated as fair and objective, individuals may be more likely to interpret the layoff as reflecting their own shortcomings, thereby increasing self-responsibility explanations that may hinder recovery for some people. These findings highlight the importance of evaluating algorithmic management not only in terms of efficiency, but also in terms of downstream psychological consequences in high-stakes contexts.
The results offer practical guidance for organizations considering algorithmic systems in downsizing. Algorithmic procedures may reduce perceived unfairness and interpersonal conflict by increasing perceived objectivity and procedural fairness, which can facilitate restructuring and improve acceptance of difficult decisions. However, organizations should also anticipate that higher fairness perceptions may encourage stronger internal attributions, leading some terminated employees to blame themselves more intensely. To balance efficiency and humane treatment, implementation should include transparency about evaluation criteria, routine bias audits, and meaningful human oversight that allows review and correction of questionable outputs. Communication and offboarding practices should explicitly emphasize that layoffs often reflect structural constraints and organizational needs rather than purely individual failure, and organizations should provide concrete support such as career counseling, retraining resources, and psychological services to mitigate harmful self-blame. At a broader level, policymakers and industry bodies can require impact assessments, auditing standards, and accountability procedures to ensure that algorithmic HR tools do not create hidden discrimination or disproportionate psychological harm, supporting responsible algorithmic management that protects employee dignity while maintaining organizational effectiveness.
SUN Zhongqiang, LIANG Qinchang, MA Tian, LAI Duoduo, BAO Yijing, DU Jian. The Impact of Algorithmic Layoffs on Perceived Fairness and Internal Attribution Tendencies [J]. 应用心理学, 0, (): 1-.