Algorithmic decision support systems (ADSS) are now embedded in managerial work, promising faster, more consistent, and more data-driven choices. Yet as organizations intensify AI deployment, an emerging risk is algorithmic trust fatigue: a progressive state in which sustained reliance on AI recommendations reduces managers' critical scrutiny, weakens discretion, and diminishes independent judgment. This paper examines how excessive AI-driven decision support can erode managerial judgment in modern organizations through interacting psychological and organizational mechanisms. Drawing on research on automation bias, cognitive offloading, trust calibration, and deskilling, it explains why managers may default to algorithmic outputs under time pressure, uncertainty, or accountability ambiguity, gradually shifting from decision-makers to validators of machine recommendations. At the organizational level, routinized AI use can foster responsibility diffusion, over-standardization of decision processes, and capability loss, leaving firms more vulnerable in novel or volatile conditions where models generalize poorly. The paper synthesizes empirical findings and illustrative case evidence across business domains (e.g., HR analytics, finance, operations) to show how overreliance can produce systematic errors, ethical blind spots, and reduced adaptive capacity, even when ADSS improve efficiency in the short term. Finally, it proposes governance and design interventions to preserve human judgment: AI literacy and critical-thinking training, human-in-the-loop accountability, explainable and uncertainty-aware interfaces, cognitive forcing functions that require reflection, and monitoring metrics that flag overreliance. Overall, the study argues that managerial effectiveness in AI-enabled firms depends not on maximizing automation but on sustaining calibrated trust and keeping human agency as the final locus of responsibility and sense-making.