峯島 宏次 ( ミネシマ コウジ )

Mineshima, Koji


Affiliation (Campus)

Faculty of Letters, Department of Humanities and Social Sciences (Philosophy) (Mita)

Position

Professor

 

Papers

  • Abductive Reasoning with Syllogistic Forms in Large Language Models

    Abe H., Ando R., Morishita T., Ozeki K., Mineshima K., Okada M.

    Lecture Notes in Computer Science 15504 (LNCS), 3-17, 2025

    ISSN 0302-9743


    Research in AI using Large Language Models (LLMs) is rapidly evolving, and the comparison of their performance with human reasoning has become a key concern. Prior studies have indicated that LLMs and humans share similar biases, such as dismissing logically valid inferences that contradict common beliefs. However, criticizing LLMs for these biases might be unfair, considering that human reasoning involves not only formal deduction but also abduction, which draws tentative conclusions from limited information. Abduction can be regarded as the inverse form of syllogism in its basic structure, that is, a process of drawing a minor premise from a major premise and conclusion. This paper explores the accuracy of LLMs in abductive reasoning by converting a syllogistic dataset into one suitable for abduction. It aims to investigate whether state-of-the-art LLMs exhibit biases in abduction and to identify potential areas for improvement, emphasizing the importance of contextualized reasoning beyond formal deduction. This investigation is vital for advancing the understanding and application of LLMs in complex reasoning tasks, offering insights into bridging the gap between machine and human cognition.

  • Is Partial Linguistic Information Sufficient for Discourse Connective Disambiguation? A Case Study of Concession

    Sato T., Kubota A., Mineshima K.

    Proceedings of the Annual Meeting of the Association for Computational Linguistics, 4, 977-990, 2025

    ISSN 0736-587X


    Discourse relations are sometimes explicitly conveyed by specific connectives. However, some connectives can signal multiple discourse relations; in such cases, disambiguation is necessary to determine which relation is intended. This task is known as discourse connective disambiguation (Pitler and Nenkova, 2009), and particular attention is often given to connectives that can convey both CONCESSION and other relations (e.g., SYNCHRONOUS). In this study, we conducted experiments to analyze which linguistic features play an important role in the disambiguation of polysemous connectives in Japanese. A neural language model (BERT) was fine-tuned using inputs from which specific linguistic features (e.g., word order, specific lexicon, etc.) had been removed. We analyzed which linguistic features affect disambiguation by comparing the model’s performance. Our results show that even after performing drastic removal, such as deleting one of the two arguments that constitute the discourse relation, the model’s performance remained relatively robust. However, the removal of certain lexical items or words belonging to specific lexical categories significantly degraded disambiguation performance, highlighting their importance in identifying the intended discourse relation.

  • Building a Large Dataset of Human-Generated Captions for Science Diagrams

    Sato Y., Suzuki A., Mineshima K.

    Lecture Notes in Computer Science 14981 (LNAI), 393-401, 2024

    ISSN 0302-9743


    Human-generated captions for photographs, particularly snapshots, have been extensively collected in recent AI research. They play a crucial role in the development of systems capable of multimodal information processing that combines vision and language. Recognizing that diagrams may serve a distinct function in thinking and communication compared to photographs, we shifted our focus from snapshot photographs to diagrams. We provided humans with text-free diagrams and collected data on the captions they generated. The diagrams were sourced from AI2D-RST, a subset of AI2D. This subset annotates the AI2D image dataset of diagrams from elementary school science textbooks with types of diagrams. We mosaicked all textual elements within the diagram images to ensure that human annotators focused solely on the diagram’s visual content when writing a sentence about what the image expresses. For the 831 images in our dataset, we obtained caption data from at least three individuals per image. To the best of our knowledge, this dataset is the first collection of caption data specifically for diagrams.

  • Can Machines and Humans Use Negation When Describing Images?

    Sato Y., Mineshima K.

    Lecture Notes in Computer Science 14522 (LNCS), 39-47, 2024

    ISSN 0302-9743


    Can negation be depicted? It has been claimed in various areas, including philosophy, cognitive science, and AI, that depicting negation through visual expressions such as images and pictures is challenging. Recent empirical findings have shown that humans can indeed understand certain images as expressing negation, whereas this ability is not exhibited by machine learning models trained on image data. To elucidate the computational ability underlying the understanding of negation in images, this study first focuses on the image captioning task, specifically the performance of models pre-trained on large linguistic and image datasets for generating text from images. Our experiment demonstrates that a state-of-the-art model achieves some success in generating consistent captions from images, particularly for photographs rather than illustrations. However, when it comes to generating captions containing negation from images, the model is not as proficient as humans. To further investigate the performance of machine learning models in a more controlled setting, we conducted an additional analysis using a Visual Question Answering (VQA) task. This task enables us to specify where in the image the model should focus its attention when answering a question. In this setting, the model’s performance improved. These results shed light on the disparities in attentional focus between humans and machine learning models.

  • Can Euler Diagrams Improve Syllogistic Reasoning in Large Language Models?

    Ando R., Ozeki K., Morishita T., Abe H., Mineshima K., Okada M.

    Lecture Notes in Computer Science 14981 (LNAI), 232-248, 2024

    ISSN 0302-9743


    In recent years, research on large language models (LLMs) has been advancing rapidly, making the evaluation of their reasoning abilities a crucial issue. Within cognitive science, there has been extensive research on human reasoning biases. It is widely observed that humans often use graphical representations as auxiliary tools during inference processes to avoid reasoning biases. However, the evaluation of LLMs’ reasoning abilities has so far largely focused on linguistic inferences, with insufficient attention given to inferences using diagrams. In this study, we concentrate on syllogisms, a basic form of logical reasoning, and evaluate the reasoning abilities of LLMs supplemented by Euler diagrams. We systematically investigate how accurately LLMs can perform logical reasoning when using diagrams as auxiliary input and whether they exhibit reasoning biases similar to those of humans. Our findings indicate that, overall, providing diagrams as auxiliary input tends to improve models’ performance, including on problems that show reasoning biases, but the effect varies depending on the conditions, and the improvement in accuracy is not as high as that seen in humans. We present results from experiments conducted under multiple conditions, including a Chain-of-Thought setting, to highlight where there is room to improve the logical diagrammatic reasoning abilities of LLMs.


Papers in KOARA (Institutional Repository)

Reviews and Commentaries

  • Preface

    Yada K., Takama Y., Mineshima K., Satoh K.

    Lecture Notes in Computer Science 13856 (LNAI), v-vi, 2023

    ISSN 0302-9743

  • Preface

    Okazaki N., Yada K., Satoh K., Mineshima K.

    Lecture Notes in Computer Science 12758 (LNAI), v-vi, 2021

    ISSN 0302-9743

Competitively Funded Research Projects

  • Interdisciplinary Research on Reasoning from an Integrated Perspective of Logic and Deep Learning

    April 2024 - March 2028

    Koji Mineshima, Grant-in-Aid for Scientific Research (B), Grant, Principal Investigator

  • A New Framework for Formal Semantics of Natural Language Based on Proof Theory and Type Theory

    April 2021 - March 2024

    MEXT / Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research (KAKENHI), Koji Mineshima, Grant-in-Aid for Scientific Research (C), Grant, Principal Investigator

  • An Integrated Analysis of Natural Language Inference Based on Methods from Proof Theory and Diagrammatic Logic

    April 2017 - March 2022

    MEXT / Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research (KAKENHI), Grant-in-Aid for Young Scientists (B), Grant, Principal Investigator

 

Courses Taught

  • Philosophy Seminar I

    Academic Year 2026

  • Methodology of Humanities Research II

    Academic Year 2026

  • Philosophy Seminar III

    Academic Year 2026

  • Philosophy Seminar II

    Academic Year 2026

  • Problems in Contemporary Logic I

    Academic Year 2026
