A Study on the Distorted Working Hours Effect of Platform Algorithm Discrimination and Its Legal Regulation: Based on a Composite Governance Framework of “Risk Grading + Algorithm Accountability”
Keywords:
Algorithmic Discrimination; Working Hours; Legal Regulation; Labor Rights; Platform Economy

Abstract
This study examines how platform algorithmic management distorts workers' working hours in the digital economy, unravelling the mechanisms of algorithmic discrimination and the legal challenges they pose. Using a “task-driven–cognitive–feedback” model together with legal text analysis, case analysis, and comparative study, it finds that algorithmic management, through data bias, black-box decision-making, and feedback loops, infringes workers' equal employment rights in three dimensions: access, process, and outcome. Empirical evidence indicates that, through dynamic pricing, credit penalties, and working-hour compression, algorithms push daily working hours to excessive levels and significantly distort the allocation of working time. When confronting algorithmic control, the current labour law system exhibits structural flaws, including norm failure, rule lag, and weak redress mechanisms, particularly with respect to standard working hours, the approval of special working-hour systems, and disputes turning on technical evidence.
Drawing on the EU's risk-based governance and the US's algorithmic accountability approach, the study proposes a composite regulatory framework of “risk grading + algorithm accountability”, recommending improved working-hour rules, an algorithm filing and review system, an adjusted burden of proof, and strengthened inter-departmental regulation to balance technical efficiency with labour equity. By linking algorithmic transparency to the protection of digital labour rights, the research offers theoretical and practical insights for China's digital labour legal system and carries broader relevance for global algorithmic governance.