Mineshima, Koji

Affiliation

Faculty of Letters, Department of Humanities and Social Science (Philosophy) (Mita)

Position

Professor

 

Papers

  • Abductive Reasoning with Syllogistic Forms in Large Language Models

    Abe H., Ando R., Morishita T., Ozeki K., Mineshima K., Okada M.

    Lecture Notes in Computer Science 15504 LNCS, 3-17, 2025

    ISSN 0302-9743

    Research in AI using Large-Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering our reasoning not only involves formal deduction but also abduction, which draws tentative conclusions from limited information. Abduction can be regarded as the inverse form of syllogism in its basic structure, that is, a process of drawing a minor premise from a major premise and conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether the state-of-the-art LLMs exhibit biases in abduction and to identify potential areas for improvement, emphasizing the importance of contextualized reasoning beyond formal deduction. This investigation is vital for advancing the understanding and application of LLMs in complex reasoning tasks, offering insights into bridging the gap between machine and human cognition.

  • Is Partial Linguistic Information Sufficient for Discourse Connective Disambiguation? A Case Study of Concession

    Sato T., Kubota A., Mineshima K.

    Proceedings of the Annual Meeting of the Association for Computational Linguistics 4, 977-990, 2025

    ISSN 0736-587X

    Discourse relations are sometimes explicitly conveyed by specific connectives. However, some connectives can signal multiple discourse relations; in such cases, disambiguation is necessary to determine which relation is intended. This task is known as discourse connective disambiguation (Pitler and Nenkova, 2009), and particular attention is often given to connectives that can convey both CONCESSION and other relations (e.g., SYNCHRONOUS). In this study, we conducted experiments to analyze which linguistic features play an important role in the disambiguation of polysemous connectives in Japanese. A neural language model (BERT) was fine-tuned on inputs from which specific linguistic features (e.g., word order or specific lexical items) had been removed. We analyzed which linguistic features affect disambiguation by comparing the model's performance across these conditions. Our results show that even after drastic removal, such as deleting one of the two arguments that constitute the discourse relation, the model's performance remained relatively robust. However, the removal of certain lexical items, or of words belonging to specific lexical categories, significantly degraded disambiguation performance, highlighting their importance in identifying the intended discourse relation.

  • Building a Large Dataset of Human-Generated Captions for Science Diagrams

    Sato Y., Suzuki A., Mineshima K.

    Lecture Notes in Computer Science 14981 LNAI, 393-401, 2024

    ISSN 0302-9743

    Human-generated captions for photographs, particularly snapshots, have been extensively collected in recent AI research. They play a crucial role in the development of systems capable of multimodal information processing that combines vision and language. Recognizing that diagrams may serve a distinct function in thinking and communication compared to photographs, we shifted our focus from snapshot photographs to diagrams. We provided humans with text-free diagrams and collected data on the captions they generated. The diagrams were sourced from AI2D-RST, a subset of AI2D. This subset annotates the AI2D image dataset of diagrams from elementary school science textbooks with types of diagrams. We mosaicked all textual elements within the diagram images to ensure that human annotators focused solely on the diagram’s visual content when writing a sentence about what the image expresses. For the 831 images in our dataset, we obtained caption data from at least three individuals per image. To the best of our knowledge, this dataset is the first collection of caption data specifically for diagrams.

  • Can Machines and Humans Use Negation When Describing Images?

    Sato Y., Mineshima K.

    Lecture Notes in Computer Science 14522 LNCS, 39-47, 2024

    ISSN 0302-9743

    Can negation be depicted? It has been claimed in various areas, including philosophy, cognitive science, and AI, that depicting negation through visual expressions such as images and pictures is challenging. Recent empirical findings have shown that humans can indeed understand certain images as expressing negation, whereas machine learning models trained on image data do not exhibit this ability. To elucidate the computational ability underlying the understanding of negation in images, this study first focuses on the image captioning task, specifically the performance of models pre-trained on large linguistic and image datasets in generating text from images. Our experiment demonstrates that a state-of-the-art model achieves some success in generating consistent captions from images, particularly for photographs rather than illustrations. However, when it comes to generating captions containing negation from images, the model is not as proficient as humans. To further investigate the performance of machine learning models in a more controlled setting, we conducted an additional analysis using a Visual Question Answering (VQA) task, which enables us to specify where in the image the model should focus its attention when answering a question. In this setting, the model's performance improved. These results shed light on the disparities in attentional focus between humans and machine learning models.

  • Can Euler Diagrams Improve Syllogistic Reasoning in Large Language Models?

    Ando R., Ozeki K., Morishita T., Abe H., Mineshima K., Okada M.

    Lecture Notes in Computer Science 14981 LNAI, 232-248, 2024

    ISSN 0302-9743

    In recent years, research on large language models (LLMs) has been advancing rapidly, making the evaluation of their reasoning abilities a crucial issue. Within cognitive science, there has been extensive research on human reasoning biases, and it is widely observed that humans often use graphical representations as auxiliary tools during inference to avoid such biases. Currently, however, the evaluation of LLMs' reasoning abilities has largely focused on linguistic inferences, with insufficient attention given to inferences using diagrams. In this study, we concentrate on syllogisms, a basic form of logical reasoning, and evaluate the reasoning abilities of LLMs supplemented by Euler diagrams. We systematically investigate how accurately LLMs can perform logical reasoning when using diagrams as auxiliary input and whether they exhibit reasoning biases similar to those of humans. Our findings indicate that, overall, providing diagrams as auxiliary input tends to improve models' performance, including on problems that show reasoning biases, but the effect varies depending on the conditions, and the improvement in accuracy is not as high as that seen in humans. We present results from experiments conducted under multiple conditions, including a Chain-of-Thought setting, to highlight where there is room to improve the logical diagrammatic reasoning abilities of LLMs.

Reviews, Commentaries, etc.

  • Preface

    Yada K., Takama Y., Mineshima K., Satoh K.

    Lecture Notes in Computer Science 13856 LNAI, v-vi, 2023

    ISSN 0302-9743

  • Preface

    Okazaki N., Yada K., Satoh K., Mineshima K.

    Lecture Notes in Computer Science 12758 LNAI, v-vi, 2021

    ISSN 0302-9743

Research Projects of Competitive Funds, etc.

  • Interdisciplinary research on reasoning based on a unified perspective of logic and deep learning

    2024.04 - 2028.03

    Grant-in-Aid for Scientific Research (B), Principal investigator

  • A new framework for formal semantics of natural language based on proof theory and type theory

    2021.04 - 2024.03

    MEXT/JSPS Grant-in-Aid for Scientific Research (C), Principal investigator

  • An integrated analysis of natural language inference based on methods from proof theory and diagrammatic logic

    2017.04 - 2022.03

    MEXT/JSPS Grant-in-Aid for Early-Career Scientists, Principal investigator

 

Courses Taught

  • INTENSIVE SEMINAR: PHILOSOPHY 1

    2026

  • RESEARCH METHODS FOR THE ARTS AND HUMANITIES 2

    2026

  • INTENSIVE SEMINAR: PHILOSOPHY 3

    2026

  • INTENSIVE SEMINAR: PHILOSOPHY 2

    2026

  • CONTEMPORARY LOGIC ISSUES 1

    2026