橋本 正弘 (ハシモト マサヒロ)

Hashimoto, Masahiro


Affiliation (Campus)

School of Medicine, Department of Radiology (Diagnostic Radiology) (Shinanomachi)

Position

Senior Assistant Professor

Academic Background

  • Apr. 2000 - Mar. 2006

    Keio University, School of Medicine, Department of Medicine

    University, Graduated

Licenses and Qualifications

  • Registered Information Security Specialist, Apr. 2017

 

Papers

  • CT screening for COVID-19 in asymptomatic patients before hospital admission

    Uchida S., Uno S., Uwamino Y., Hashimoto M., Matsumoto S., Obara H., Jinzaki M., Kitagawa Y., Hasegawa N.

    Journal of Infection and Chemotherapy, 27 (2), 232-236, Feb. 2021

    ISSN 1341-321X


    Introduction: In the novel coronavirus disease (COVID-19) pandemic era, it is essential to rule out COVID-19 effectively to prevent transmission in both communities and medical facilities. According to previous reports from high-prevalence areas, CT screening may be useful in the diagnosis of COVID-19. However, the value of CT screening in low-prevalence areas has scarcely been reported. Methods: This report examines the diagnostic efficacy of CT screening before admission to a hospital in Tokyo. We conducted a retrospective analysis at Keio University Hospital from April 6, 2020, through May 29, 2020. We set up an outpatient screening clinic for COVID-19 on April 6, administering both PCR with nasopharyngeal swabs and chest CT for all patients scheduled for surgery under general anesthesia. Results: A total of 292 asymptomatic patients were included in this study. There were three PCR-positive patients, all of whom had negative CT findings, so both the sensitivity and the positive predictive value (PPV) of CT were 0%. There were nine CT-positive patients; the specificity and the negative predictive value (NPV) were 96.9% and 98.9%, respectively. Conclusion: CT screening was not useful in low-prevalence areas at this time in Tokyo, even with the inclusion of the most prevalent phase. Given that the utility of CT screening depends on disease prevalence, criteria for performing CT screening based on the prevalence of COVID-19 should be established.
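
    The reported specificity and NPV follow directly from the screening counts above. A minimal Python sketch of that confusion-matrix arithmetic (counts taken from the abstract; variable names are illustrative only):

        # Screening counts reported in the abstract (PCR as the reference standard).
        total = 292
        pcr_positive = 3        # all three had negative CT findings
        ct_positive = 9         # none of these were PCR-positive

        tp = 0                          # CT-positive and PCR-positive
        fn = pcr_positive - tp          # 3: PCR-positive cases missed by CT
        fp = ct_positive - tp           # 9: CT-positive but PCR-negative
        tn = total - tp - fn - fp       # 280

        sensitivity = tp / (tp + fn)                 # 0/3     = 0.0%
        specificity = tn / (tn + fp)                 # 280/289 ~ 96.9%
        ppv = tp / (tp + fp) if (tp + fp) else 0.0   # 0/9     = 0.0%
        npv = tn / (tn + fn)                         # 280/283 ~ 98.9%

        print(f"sensitivity={sensitivity:.1%}  specificity={specificity:.1%}  "
              f"PPV={ppv:.1%}  NPV={npv:.1%}")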

  • A diagnostic strategy for Parkinsonian syndromes using quantitative indices of DAT SPECT and MIBG scintigraphy: an investigation using the classification and regression tree analysis

    Iwabuchi Y., Kameyama M., Matsusaka Y., Narimatsu H., Hashimoto M., Seki M., Ito D., Tabuchi H., Yamada Y., Jinzaki M.

    European Journal of Nuclear Medicine and Molecular Imaging, 2021

    ISSN 1619-7070


    Purpose: We aimed to evaluate the diagnostic performance of quantitative indices obtained from dopamine transporter (DAT) single-photon emission computed tomography (SPECT) and 123I-metaiodobenzylguanidine (MIBG) scintigraphy for Parkinsonian syndromes (PS) using classification and regression tree (CART) analysis. Methods: We retrospectively enrolled 216 patients with or without PS, including 80 without PS (NPS) and 136 with PS [90 Parkinson’s disease (PD), 21 dementia with Lewy bodies (DLB), 16 progressive supranuclear palsy (PSP), and 9 multiple system atrophy (MSA)]. The striatal binding ratio (SBR), putamen-to-caudate ratio (PCR), and asymmetry index (AI) were calculated using DAT SPECT. The heart-to-mediastinum uptake ratio (H/M) based on the early (H/M [Early]) and delayed (H/M [Delay]) images and the cardiac washout rate (WR) were calculated from MIBG scintigraphy. CART analysis was used to establish a diagnostic decision tree model for differentiating PS based on these quantitative indices. Results: The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 87.5%, 96.3%, 93.3%, 92.9%, and 93.1% for NPS; 91.1%, 78.6%, 75.2%, 92.5%, and 83.8% for PD; 57.1%, 95.9%, 60.0%, 95.4%, and 92.1% for DLB; and 50.0%, 98.0%, 66.7%, 96.1%, and 94.4% for PSP, respectively. The PCR, WR, H/M (Delay), and SBR indices played important roles in the optimal decision tree model, with feature importances of 0.61, 0.22, 0.11, and 0.05, respectively. Conclusion: The quantitative indices showed high diagnostic performance in differentiating NPS, PD, DLB, and PSP, but not MSA. Our findings provide useful guidance on how to apply these quantitative indices in clinical practice.
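
    As an illustration of the CART step, the sketch below fits a decision tree over the six indices with scikit-learn. The file name quantitative_indices.csv, the column names, and the tree depth are assumptions for illustration, not the study's actual data or settings.

        # Illustrative CART model over the DAT SPECT / MIBG quantitative indices.
        import pandas as pd
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier, export_text

        FEATURES = ["SBR", "PCR", "AI", "HM_early", "HM_delay", "WR"]

        df = pd.read_csv("quantitative_indices.csv")   # hypothetical file: one row per patient
        X, y = df[FEATURES], df["diagnosis"]           # diagnosis in {NPS, PD, DLB, PSP, MSA}

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)

        # CART is a binary decision tree; depth is limited here to keep the rules readable.
        tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
        tree.fit(X_train, y_train)

        print(export_text(tree, feature_names=FEATURES))        # the learned decision rules
        print(dict(zip(FEATURES, tree.feature_importances_)))   # cf. PCR, WR, H/M(Delay), SBR
        print("test accuracy:", tree.score(X_test, y_test))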

  • Extraction of lung and lesion regions from COVID-19 CT volumes using 3D fully convolutional networks

    Hayashi Y., Oda M., Shen C., Hashimoto M., Otake Y., Akashi T., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 11597, 2021

    ISBN 9781510640238


    This paper presents a method for extracting the lung and lesion regions from COVID-19 CT volumes using 3D fully convolutional networks. Due to the pandemic of coronavirus disease 2019 (COVID-19), a computer-aided diagnosis (CAD) system for COVID-19 using CT volumes is required. In developing a CAD system, it is important to extract the patient's anatomical structures from the CT volume. Therefore, we develop a method for extracting the lung and lesion regions from COVID-19 CT volumes for a COVID-19 CAD system. We use a 3D U-Net-type fully convolutional network (FCN) to extract the lung and lesion regions. We also use transfer learning to train the 3D U-Net-type FCN with the limited number of available COVID-19 CT volumes. As pre-training, the proposed method trains the 3D U-Net model on an abdominal multi-organ region segmentation dataset that contains a large number of annotated CT volumes. After pre-training, we train the 3D U-Net model from the pre-trained model using a small number of annotated COVID-19 CT volumes. The experimental results showed that the proposed method could extract the lung and lesion regions from COVID-19 CT volumes.
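
    A minimal PyTorch sketch of the pre-training and fine-tuning workflow described above. The tiny TinyFCN3D network, the synthetic batches, and the class counts (8 abdominal organs, 3 COVID-19 labels) are stand-ins for illustration, not the paper's 3D U-Net implementation.

        # Sketch of transfer learning for 3D segmentation (PyTorch).
        import torch
        import torch.nn as nn

        class TinyFCN3D(nn.Module):
            """Minimal 3D encoder-decoder, used only to illustrate the workflow."""
            def __init__(self, out_ch):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.head = nn.Sequential(
                    nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
                    nn.Conv3d(16, out_ch, 1),
                )

            def forward(self, x):
                return self.head(self.encoder(x))

        def train_step(model, volume, labels, lr):
            # One optimization step; a real training loop would iterate over a dataset.
            opt = torch.optim.Adam(model.parameters(), lr=lr)
            opt.zero_grad()
            loss = nn.CrossEntropyLoss()(model(volume), labels)
            loss.backward()
            opt.step()
            return loss.item()

        # 1) Pre-training on an abdominal multi-organ dataset (here: one synthetic batch, 8 classes).
        pretrain_model = TinyFCN3D(out_ch=8)
        train_step(pretrain_model, torch.randn(1, 1, 64, 64, 64),
                   torch.randint(0, 8, (1, 64, 64, 64)), lr=1e-3)

        # 2) Fine-tuning on COVID-19 volumes (3 classes: background / lung / lesion).
        #    Copy every pre-trained weight whose shape matches; the output head is re-initialized.
        finetune_model = TinyFCN3D(out_ch=3)
        target_state = finetune_model.state_dict()
        compatible = {k: v for k, v in pretrain_model.state_dict().items()
                      if k in target_state and v.shape == target_state[k].shape}
        finetune_model.load_state_dict(compatible, strict=False)

        train_step(finetune_model, torch.randn(1, 1, 64, 64, 64),
                   torch.randint(0, 3, (1, 64, 64, 64)), lr=1e-4)   # smaller LR for fine-tuning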

  • Unsupervised segmentation of COVID-19 infected lung clinical CT volumes using image inpainting and representation learning

    Zheng T., Oda M., Wang C., Moriya T., Hayashi Y., Otake Y., Hashimoto M., Akashi T., Mori M., Takabatake H., Natori H., Mori K.

    Progress in Biomedical Optics and Imaging - Proceedings of SPIE, 11596, 2021

    ISBN 9781510640214


    This paper proposes a method for segmenting the infected area in clinical CT volumes of lungs infected with COVID-19 (coronavirus disease 2019). COVID-19 spread globally from 2019 to 2020, causing the world to face a global health crisis. It is desirable to estimate the severity of COVID-19 based on the infected area segmented from clinical computed tomography (CT) volumes of COVID-19 patients. Given the lung field from a COVID-19 clinical CT volume as input, we want an automated approach that segments the infected area. Since labeling the infected area for supervised segmentation requires a great deal of labor, we propose a segmentation method that does not require such labels. Our method builds on a baseline method that utilizes representation learning and clustering. However, the baseline method tends to segment anatomical structures with high HU (Hounsfield unit) intensity, such as blood vessels, into the infected area. To solve this problem, we propose a novel pre-processing method that transforms high-intensity anatomical structures into low-intensity structures. This pre-processing prevents high-intensity anatomical structures from being mis-segmented as infected area. Given the lung field extracted from a CT volume, our method segments the lung field into normal tissue, ground-glass opacity (GGO), and consolidation. Our method consists of three steps: 1) pulmonary blood vessel segmentation, 2) image inpainting of the pulmonary blood vessels based on the segmentation result, and 3) segmentation of the infected area. Compared to the baseline method, experimental results showed that our method improves segmentation accuracy, especially around tubular structures such as blood vessels. Our method improved the normalized mutual information score from 0.280 (baseline) to 0.394.
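
    A simplified Python sketch of the pre-processing idea (suppressing high-intensity structures inside the lung field before unsupervised clustering). The fixed HU threshold, median-filter fill, and intensity-based k-means below are stand-ins for the paper's vessel segmentation network, image inpainting, and representation learning.

        # Simplified sketch: suppress bright, vessel-like voxels in the lung field,
        # then cluster the remaining intensities into three tissue classes.
        import numpy as np
        from scipy import ndimage
        from sklearn.cluster import KMeans

        def suppress_high_intensity(ct_hu, lung_mask, vessel_thresh_hu=-200):
            """Replace bright (vessel-like) lung voxels with a local median estimate."""
            filled = ct_hu.copy()
            bright = lung_mask & (ct_hu > vessel_thresh_hu)
            local_median = ndimage.median_filter(ct_hu, size=5)
            filled[bright] = local_median[bright]
            return filled

        def cluster_lung(ct_hu, lung_mask, n_classes=3, seed=0):
            """Cluster lung voxels into surrogates for normal tissue / GGO / consolidation."""
            filled = suppress_high_intensity(ct_hu, lung_mask)
            voxels = filled[lung_mask].reshape(-1, 1).astype(np.float32)
            labels = KMeans(n_clusters=n_classes, n_init=10, random_state=seed).fit_predict(voxels)
            out = np.full(ct_hu.shape, -1, dtype=np.int8)   # -1 = outside the lung field
            out[lung_mask] = labels
            return out

        # Toy volume standing in for a lung-field CT crop (values roughly in HU).
        rng = np.random.default_rng(0)
        ct = rng.normal(-800, 100, size=(32, 64, 64))
        mask = np.ones(ct.shape, dtype=bool)
        print(np.unique(cluster_lung(ct, mask), return_counts=True))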

  • Performance of a deep learning-based identification system for esophageal cancer from CT images

    Takeuchi M., Seto T., Hashimoto M., Ichihara N., Morimoto Y., Kawakubo H., Suzuki T., Jinzaki M., Kitagawa Y., Miyata H., Sakakibara Y.

    Esophagus, 2021

    ISSN 1612-9059


    Background: Because cancers of hollow organs such as the esophagus are hard to detect even by expert physicians, it is important to establish diagnostic systems that support physicians and increase the accuracy of diagnosis. In recent years, deep learning-based artificial intelligence (AI) technology has been employed for medical image recognition. However, no CT-based diagnostic system employing deep learning technology has yet been established for esophageal cancer. Purpose: To establish an AI-based diagnostic system for esophageal cancer from CT images. Materials and methods: In this single-center, retrospective cohort study, 457 patients with primary esophageal cancer referred to our division between 2005 and 2018 were enrolled. We fine-tuned VGG16, a deep convolutional neural network (CNN) model for image recognition, for the detection of esophageal cancer. We evaluated the diagnostic accuracy of the CNN using a test data set of 46 cancerous CT images and 100 non-cancerous images and compared it to that of two radiologists. Results: Pre-treatment esophageal cancer stages of the patients included in the test data set were clinical T1 (12 patients), clinical T2 (9 patients), clinical T3 (20 patients), and clinical T4 (5 patients). The CNN-based system showed a diagnostic accuracy of 84.2%, an F value of 0.742, a sensitivity of 71.7%, and a specificity of 90.0%. Conclusions: Our AI-based diagnostic system succeeded in detecting esophageal cancer with high accuracy. More training with large datasets collected from multiple centers would lead to even higher diagnostic accuracy and aid better decision making.
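
    A minimal torchvision sketch of fine-tuning VGG16 for two-class (cancerous vs. non-cancerous) CT image classification. The ImageNet initialization, frozen backbone, and synthetic batch are assumptions for illustration, not the study's actual training configuration.

        # Fine-tuning VGG16 for binary classification of CT images (torchvision sketch).
        import torch
        import torch.nn as nn
        from torchvision import models

        model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

        # Freeze the convolutional backbone and replace the classifier head for 2 classes.
        for p in model.features.parameters():
            p.requires_grad = False
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

        optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        # Stand-in batch: CT slices resized to 224x224 and replicated to 3 channels.
        images = torch.randn(4, 3, 224, 224)
        labels = torch.randint(0, 2, (4,))

        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"training loss on stand-in batch: {loss.item():.3f}")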


Presentations

  • Investigating the Time to Distance Measurement in Ultrasonography and Supporting it with Deep Learning Techniques

    Masahiro Hashimoto, Yurie Kanauchi, Naoki Toda, Haque Hasnine, Takumi Seto, Yasufumi Sakakibara, Masahiro Jinzaki

    18th AOCR 2021 Spring in YOKOHAMA, 

    Apr. 2021

    Oral presentation (general)

  • Landmark position prediction in ultrasound images using deep learning

    Yurie Kanauchi, Masahiro Hashimoto, Naoki Toda, Masahiro Jinzaki, Yasufumi Sakakibara

    日本医用画像人工知能研究会, 

    Nov. 2020

    Oral presentation (general)

  • Classification of esophageal cancer CT images using deep learning

    Takumi Seto, Masashi Takeuchi, Masahiro Hashimoto, Masahiro Jinzaki, Yuko Kitagawa, Yasufumi Sakakibara

    日本医用画像人工知能研究会, 

    Nov. 2020

    Oral presentation (general)

  • Initial experience with the SYNAPSE SAI viewer in diagnostic image reading

    Masahiro Hashimoto

    The 56th Autumn Clinical Meeting of the Japan Radiological Society, 

    Oct. 2020

    Public lecture, seminar, tutorial, course, lecture, etc.

  • AI in the field of diagnostic imaging

    Masahiro Hashimoto

    The 39th Academic Meeting of the 画像医学会, 

    Feb. 2020

    Symposium/workshop panel (nominated)


Research Projects of Competitive Funds

  • Development of AI capable of presenting its diagnostic rationale in renal tumor imaging diagnosis

    Apr. 2020 - Mar. 2023

    Ministry of Education, Culture, Sports, Science and Technology / Japan Society for the Promotion of Science, Grants-in-Aid for Scientific Research (KAKENHI), Masahiro Hashimoto, Grant-in-Aid for Early-Career Scientists, Grant, Principal Investigator

 

Courses Taught

  • Radiology Lecture

    AY 2024

  • Radiology Lecture

    AY 2023

  • Radiology Lecture

    AY 2022

  • Radiology Lecture

    AY 2021

  • Radiology Lecture

    AY 2020


 

Academic Society Memberships

  • Japanese Society of Interventional Radiology

    Apr. 2008 - Present
  • Japan Radiological Society

    Apr. 2008 - Present
  • Japan Association for Medical Informatics

     
  • Japanese Society of Nuclear Medicine

     
  • Japan Society of Ultrasonics in Medicine