Yoshinaga, Kyoko


Affiliation

Graduate School of Media and Governance (Shonan Fujisawa)

Position

Project Associate Professor (Non-tenured)

 

Books

  • Algorithmic Hiring Systems: Implications and Recommendations for Organisations and Policymakers

    Schloetzer J.D., Yoshinaga K., Yearbook of Socio-Economic Constitutions, 2023


    Algorithms are becoming increasingly prevalent in the hiring process, as they are used to source, screen, interview, and select job applicants. This chapter examines the perspectives of both organisations and policymakers on algorithmic hiring systems, drawing examples from Japan and the United States. The focus is on discussing the drivers underlying the rising demand for algorithmic hiring systems and four risks associated with their implementation: the privacy of job candidate data; the privacy of current and former employees’ workplace data; the potential for algorithmic hiring bias; and concerns surrounding ongoing oversight of algorithmically assisted decision-making throughout the hiring process. These risks serve as the foundation for developing a risk management framework based on management control principles to facilitate dialogue within organisations to address the governance and management of such risks. The framework also identifies areas policymakers can focus on to help balance (1) granting organisations unfettered access to the personal and potentially sensitive data of job applicants and employees to develop hiring algorithms and (2) implementing strict data protection laws that safeguard individuals’ rights yet may impede innovation, and emphasises the need to establish an intra-governmental AI oversight and coordination function that tracks, analyses, and reports on adverse algorithmic incidents. The chapter concludes by highlighting seven recommendations to mitigate the risks organisations and policymakers face in the development, use, and oversight of algorithmic hiring.

Papers

  • Controllability as a Core Principle for AGI Governance and Safety

    Yoshinaga K.

    Lecture Notes in Networks and Systems (LNNS), vol. 1556, pp. 144–153, 2026

    ISSN 2367-3370


    This paper explores the importance of ensuring AGI (Artificial General Intelligence) controllability and safety as AI systems advance from narrow AI (NAI) to more autonomous systems. AGI's ability to learn and make decisions independently introduces significant challenges, particularly in critical sectors like healthcare, finance, and infrastructure, as well as when AGI is embedded in a physical object, such as a robot. Traditional AI governance principles of transparency, explainability, and accountability become insufficient when dealing with more sophisticated AGI, which, in this paper, refers to AI systems that exhibit significantly higher autonomy than current systems, as these models are too complex to be fully understood or controlled by humans. Instead, the paper argues that “controllability” should be the primary focus of AGI governance to prevent unintended consequences. The paper examines technological approaches such as control by design, fail-safes and redundancy mechanisms, formal verification, adversarial testing, adaptive ethical constraints, and sandboxing, alongside institutional strategies including business continuity planning, continuous monitoring, AI ethics boards, and multi-layered audits. It stresses that a combination of technological, institutional, and regulatory measures is essential to ensure AGI remains safe and aligned with human intent. The paper concludes by emphasizing the need for interdisciplinary collaboration among engineers, ethicists, legal experts, and policymakers, and calls for AGI development to be guided by human values and governance frameworks to avoid catastrophic risks and ensure that AI serves societal benefit. *Please note that this research is preliminary and intended to serve as a basis for open discussion during the Special Session.