The health crisis arising from the coronavirus pandemic has reignited the debate over algorithmic surveillance. As several countries have released artificial intelligence (AI)-enabled applications to complement manual contact tracing, it is worth critically assessing the ethical concerns raised by algorithmic contact tracing, particularly in light of the heterogeneous policy responses to, and the severity of, the Covid-19 outbreak within the European Union (EU). This contribution examines the human rights implications of contact tracing against well-established challenges in the use of automated decision-making systems, identifying problematic aspects of pandemic-era digital surveillance. The analysis scrutinizes these theoretical challenges and provides evidence through selected case studies within the EU, revealing shortcomings such as privacy and data protection violations, biased and/or discriminatory outcomes, and limits to accessibility. Even though the relevant EU legal framework is comparatively far more advanced, the analysis shows that problematic use cases remain, especially in the arena of unofficial tools, which may nevertheless shape the ability to access essential services. The discussion is extended to the EU proposal for an AI Act, indicating that, as it stands, it may provide insufficient safeguards, especially in the domain of biometric systems.