Hashimoto, Masahiro



School of Medicine, Department of Radiology (Diagnostic Radiology) (Shinanomachi)


Project Assistant Professor (Non-tenured)/Project Research Associate (Non-tenured)/Project Instructor (Non-tenured)

Academic Background

  • 2000.04

    Keio University, School of Medicine, Department of Medicine

    Japan, University, Graduated

Licenses and Qualifications

  • Registered Information Security Specialist, 2017.04


Papers

  • CT screening for COVID-19 in asymptomatic patients before hospital admission

    Uchida S., Uno S., Uwamino Y., Hashimoto M., Matsumoto S., Obara H., Jinzaki M., Kitagawa Y., Hasegawa N.

    Journal of Infection and Chemotherapy, 27(2), 232-236, 2021.02

    ISSN  1341-321X


    Introduction: In the novel coronavirus disease (COVID-19) pandemic era, it is essential to rule out COVID-19 effectively to prevent transmission in both communities and medical facilities. According to previous reports in high prevalence areas, CT screening may be useful in the diagnosis of COVID-19. However, the value of CT screening in low prevalence areas has scarcely been reported. Methods: This report examines the diagnostic efficacy of CT screening before admission to a hospital in Tokyo. We conducted a retrospective analysis at Keio University Hospital from April 6, 2020, through May 29, 2020. We set up an outpatient screening clinic for COVID-19 on April 6, administering both PCR with nasopharyngeal swabs and chest CT for all patients scheduled for surgery under general anesthesia. Results: A total of 292 asymptomatic patients were included in this study. There were three PCR-positive patients, and they all had negative CT findings, which revealed that both the sensitivity and positive predictive value (PPV) of CT were 0%. There were nine CT-positive patients; the specificity and the negative predictive value (NPV) were 96.9% and 98.9%, respectively. Conclusion: CT screening was not useful in low prevalence areas at this time in Tokyo, even with the inclusion of the most prevalent phase. Given that the utility of CT screening depends on disease prevalence, the criteria for performing CT screening based on the prevalence of COVID-19 should be established.
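    The reported metrics follow directly from a 2x2 confusion matrix. A minimal sketch reconstructing them from the counts in the abstract (assuming all nine CT-positive patients were PCR-negative, as the 0% PPV implies):

    ```python
    # Counts from the abstract: 292 asymptomatic patients,
    # 3 PCR-positive (all CT-negative), 9 CT-positive.
    total = 292
    tp = 0                      # CT-positive AND PCR-positive
    fn = 3                      # CT-negative but PCR-positive
    fp = 9                      # CT-positive but PCR-negative
    tn = total - tp - fn - fp   # remaining patients: 280

    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)

    print(f"sensitivity={sensitivity:.1%}  PPV={ppv:.1%}  "
          f"specificity={specificity:.1%}  NPV={npv:.1%}")
    ```

    The computed specificity (280/289 = 96.9%) and NPV (280/283 = 98.9%) match the values reported in the abstract.
    
    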

  • Unsupervised segmentation of COVID-19 infected lung clinical CT volumes using image inpainting and representation learning

    Zheng T., Oda M., Wang C., Moriya T., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Mori M., Takabatake H., Natori H., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 11596, 2021

    ISBN  9781510640214


    This paper proposes a segmentation method for the infected area in clinical CT volumes of COVID-19 (Coronavirus Disease 2019) infected lungs. COVID-19 spread globally from 2019 to 2020, causing a worldwide health crisis. It is desirable to estimate the severity of COVID-19 by observing the infected area segmented from clinical computed tomography (CT) volumes of COVID-19 patients. Given the lung field from a COVID-19 lung clinical CT volume as input, we desire an automated approach that can segment the infected area. Since labeling the infected area for supervised segmentation requires substantial labor, we propose a segmentation method that needs no such labels. Our method builds on a baseline method utilizing representation learning and clustering. However, the baseline method is likely to segment anatomical structures with high HU (Hounsfield unit) intensity, such as blood vessels, into the infected area. To solve this problem, we propose a novel pre-processing method that transforms high-intensity anatomical structures into low-intensity structures. This pre-processing prevents high-intensity anatomical structures from being mis-segmented into the infected area. Given the lung field extracted from a CT volume, our method segments it into normal tissue, GGO (ground-glass opacity), and consolidation. Our method consists of three steps: 1) pulmonary blood vessel segmentation, 2) image inpainting of pulmonary blood vessels based on the segmentation result, and 3) segmentation of the infected area. Experimental results showed that, compared to the baseline method, our method improves segmentation accuracy, especially on tubular structures such as blood vessels. Our method improved the normalized mutual information score from 0.280 (the baseline method) to 0.394.

  • Performance of a deep learning-based identification system for esophageal cancer from CT images

    Takeuchi M., Seto T., Hashimoto M., Ichihara N., Morimoto Y., Kawakubo H., Suzuki T., Jinzaki M., Kitagawa Y., Miyata H., Sakakibara Y.

    Esophagus, 2021

    ISSN  1612-9059


    Background: Because cancers of hollow organs such as the esophagus are hard to detect even for expert physicians, it is important to establish diagnostic systems that support physicians and increase the accuracy of diagnosis. In recent years, deep learning-based artificial intelligence (AI) technology has been employed for medical image recognition. However, no optimal CT diagnostic system employing deep learning technology has been established for esophageal cancer so far. Purpose: To establish an AI-based diagnostic system for esophageal cancer from CT images. Materials and methods: In this single-center, retrospective cohort study, 457 patients with primary esophageal cancer referred to our division between 2005 and 2018 were enrolled. We fine-tuned VGG16, a deep learning convolutional neural network (CNN) image recognition model, for the detection of esophageal cancer. We evaluated the diagnostic accuracy of the CNN using a test data set including 46 cancerous CT images and 100 non-cancerous images and compared it to that of two radiologists. Results: Pre-treatment esophageal cancer stages of the patients included in the test data set were clinical T1 (12 patients), clinical T2 (9 patients), clinical T3 (20 patients), and clinical T4 (5 patients). The CNN-based system showed a diagnostic accuracy of 84.2%, F value of 0.742, sensitivity of 71.7%, and specificity of 90.0%. Conclusions: Our AI-based diagnostic system succeeded in detecting esophageal cancer with high accuracy. More training with vast datasets collected from multiple centers would lead to even higher diagnostic accuracy and aid better decision making.
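    As a consistency check, the reported accuracy and F value can be reproduced from the test-set sizes (46 cancerous, 100 non-cancerous) and the reported sensitivity and specificity; a minimal sketch:

    ```python
    # Test-set composition and per-class rates from the abstract.
    pos, neg = 46, 100
    tp = round(0.717 * pos)   # sensitivity 71.7% -> 33 true positives
    tn = round(0.900 * neg)   # specificity 90.0% -> 90 true negatives
    fp, fn = neg - tn, pos - tp

    accuracy = (tp + tn) / (pos + neg)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_value = 2 * precision * recall / (precision + recall)

    print(f"accuracy={accuracy:.1%}  F={f_value:.3f}")
    ```

    The computed accuracy (123/146 = 84.2%) and F value (0.742) agree with the figures in the abstract.
    
    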

  • Lung infection and normal region segmentation from CT volumes of COVID-19 cases

    Oda M., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 11597, 2021

    ISBN  9781510640238


    This paper proposes an automated segmentation method for infection and normal regions in the lung from CT volumes of COVID-19 patients. Since December 2019, novel coronavirus disease 2019 (COVID-19) has spread across the world, significantly impacting our economic activities and daily lives. To diagnose the large number of infected patients, computer-aided diagnosis is needed. Chest CT is effective for the diagnosis of viral pneumonia, including COVID-19. A quantitative, computer-based method for analyzing the condition of the lung from CT volumes is required for COVID-19 diagnosis assistance. This paper proposes an automated segmentation method for infection and normal regions in the lung from CT volumes using a COVID-19 segmentation fully convolutional network (FCN). In the diagnosis of lung diseases including COVID-19, analysis of the conditions of normal and infection regions in the lung is important. Our method recognizes and segments normal and infection regions of the lung in CT volumes. To segment infection regions that have various shapes and sizes, we introduced dense pooling connections and dilated convolutions into our FCN. We applied the proposed method to CT volumes of COVID-19 cases. From mild to severe cases of COVID-19, the proposed method correctly segmented normal and infection regions in the lung. Dice scores of the normal and infection regions were 0.911 and 0.753, respectively.
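    The Dice score used to evaluate the segmentations measures overlap between a predicted and a reference mask; a minimal sketch on flat binary masks:

    ```python
    def dice(pred, truth):
        """Dice similarity coefficient between two binary masks
        (flat sequences of 0/1): 2*|A∩B| / (|A| + |B|)."""
        inter = sum(p and t for p, t in zip(pred, truth))
        total = sum(pred) + sum(truth)
        return 2 * inter / total if total else 1.0

    # Toy masks: 3 overlapping voxels out of 4 predicted and 4 true.
    pred  = [1, 1, 1, 1, 0, 0]
    truth = [0, 1, 1, 1, 1, 0]
    print(dice(pred, truth))  # 2*3 / (4+4) = 0.75
    ```

    A score of 1.0 means perfect overlap; the paper's 0.911 (normal) and 0.753 (infection) indicate that normal regions were segmented more reliably than infection regions.
    
    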

  • Extremely imbalanced subarachnoid hemorrhage detection based on DenseNet-LSTM network with class-balanced loss and transfer learning

    Lu Z., Oda M., Hayashi Y., Hu T., Itoh H., Watadani T., Abe O., Hashimoto M., Jinzaki M., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 11597, 2021

    ISBN  9781510640238


    Subarachnoid hemorrhage (SAH) detection is a critical, severe problem that has long challenged clinical residents. With the rise of deep learning technologies, SAH detection has made significant breakthroughs in the past ten years. However, performance degrades significantly on imbalanced data, a persistent criticism of deep learning models. In this study, we present a DenseNet-LSTM network with a Class-Balanced Loss and a transfer learning strategy to solve the SAH detection problem on an extremely imbalanced dataset. Compared to previous works, the proposed framework not only effectively integrates greyscale features and spatial information from consecutive CT scans, but also employs the Class-Balanced Loss and transfer learning to alleviate the adverse effects of extreme SAH case scarcity and to broaden feature diversity, respectively, mimicking the actual situation of emergency departments. Comprehensive experiments were conducted on a dataset consisting of 2,519 cases without hemorrhage and only 33 cases with SAH. Experimental results demonstrate that the F-measure score of SAH detection achieved a remarkable improvement: the backbone DenseNet121 gained around a 33% improvement after transfer learning, and on this basis, adding the Class-Balanced Loss and the LSTM structure further increased the F-measure score by 6.1% and 2.7%, respectively.
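    "Class-Balanced Loss" commonly refers to reweighting each class by its effective number of samples, (1 - β^n)/(1 - β); the abstract does not spell out its formulation, so the sketch below is an assumption based on that common definition, using the dataset's class counts (2,519 non-hemorrhage vs. 33 SAH):

    ```python
    # Class-balanced weights via the "effective number of samples":
    # E_n = (1 - beta**n) / (1 - beta), so weight_c ∝ (1 - beta) / (1 - beta**n_c).
    def class_balanced_weights(counts, beta=0.999):
        raw = [(1 - beta) / (1 - beta ** n) for n in counts]
        scale = len(counts) / sum(raw)   # normalize: weights sum to #classes
        return [w * scale for w in raw]

    # Counts from the abstract: 2,519 negatives, 33 SAH positives.
    w_neg, w_sah = class_balanced_weights([2519, 33])
    print(f"non-hemorrhage weight={w_neg:.3f}, SAH weight={w_sah:.3f}")
    ```

    The rare SAH class receives a much larger weight than the majority class, which is how the loss counteracts the extreme 2,519:33 imbalance during training.
    
    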


Papers, etc., Registered in KOARA

Presentations

  • Investigating the Time to Distance Measurement in Ultrasonography and Supporting it with Deep Learning Techniques

    Masahiro Hashimoto, Yurie Kanauchi, Naoki Toda, Haque Hasnine, Takumi Seto, Yasufumi Sakakibara, Masahiro Jinzaki

    18th AOCR 2021 Spring in YOKOHAMA, 2021.04, Oral Presentation (general)

  • Landmark Position Prediction in Ultrasound Images Using Deep Learning


    日本医用画像人工知能研究会, 2020.11, Oral Presentation (general)

  • Classification of Esophageal Cancer CT Images Using Deep Learning


    日本医用画像人工知能研究会, 2020.11, Oral Presentation (general)

  • Initial Experience with the SYNAPSE SAI viewer in Diagnostic Image Interpretation


    56th Autumn Clinical Congress of the Japan Radiological Society, 2020.10, Public discourse, seminar, tutorial, course, lecture and others

  • AI in Diagnostic Imaging


    第39回画像医学会学術集会, 2020.02, Symposium, Workshop, Panelist (nomination)


Research Projects of Competitive Funds, etc.

  • Explainable AI for differentiation of renal tumors


    MEXT/JSPS, Grant-in-Aid for Scientific Research, Masahiro Hashimoto, Grant-in-Aid for Early-Career Scientists, Principal Investigator


Courses Taught

Memberships in Academic Societies

  • Japanese Society of Interventional Radiology

  • Japan Radiological Society

  • Japan Association for Medical Informatics

  • Japanese Society of Nuclear Medicine

  • The Japan Society of Ultrasonics in Medicine