Aoki, Yoshimitsu


Affiliation

Faculty of Science and Technology, Department of Electronics and Electrical Engineering (Yagami)

Position

Professor


Profile Summary

  • 1999.04-2001.03: Research Associate, Department of Applied Physics, School of Science and Engineering, Waseda University. In Prof. Shuji Hashimoto's laboratory, engaged in research on facial image recognition and synthesis, precision industrial image measurement, and vision systems for humanoid robots.
  • 2002.04-2005.03: Lecturer, Department of Information Engineering, Faculty of Engineering, Shibaura Institute of Technology (Aoki Laboratory established); 2005.04-2008.03: Associate Professor in the same department. Engaged in research on medical and dental applications of 3D image analysis of facial shape and motion, integrated use of satellite imagery and other remote-sensing data, road-traffic imaging systems, and high-precision image measurement systems. Supervised the research of about 90 students over seven years at Shibaura Institute of Technology.
  • 2008.04-present: Associate Professor, Department of Electronics and Electrical Engineering, Faculty of Science and Technology, Keio University. Research on image measurement and recognition technologies for people and their applied systems, aiming at a broad range of industrial applications including security, marketing, medicine and welfare, beauty, interfaces, entertainment, and automobiles. Also engaged in research on media-understanding technologies informed by human cognition and affective sensibility (kansei), novel vision sensors, and robust image features.
  • 2013.02-present: Director (concurrent), Ideaquest Inc. (株式会社イデアクエスト); working toward practical medical applications of image-sensing technologies originating at Keio Science and Technology.

Career

  • 1999.04
    -
    2002.03

    Waseda University, School of Science and Engineering, Research Associate

  • 2002.04
    -
    2005.03

    Shibaura Institute of Technology, Faculty of Engineering, Department of Information Engineering, Lecturer

  • 2005.04
    -
    2008.03

    Shibaura Institute of Technology, Faculty of Engineering, Department of Information Engineering, Associate Professor (titled 助教授 until the 2007 title reform, 准教授 thereafter)

  • 2008.04
    -
    2017.03

    Keio University, Faculty of Science and Technology, Associate Professor

  • 2013.02
    -
    2017.03

    Ideaquest Inc. (株式会社イデアクエスト), Director


Academic Background

  • 1996.03

    Waseda University, Faculty of Science and Engineering, Department of Applied Physics

    University, Graduated

  • 1998.03

    Waseda University, Graduate School, Division of Science and Engineering, Major in Physics and Applied Physics

    Graduate School, Completed, Master's course

  • 2001.02

    Waseda University, Graduate School, Division of Science and Engineering, Major in Physics and Applied Physics

    Graduate School, Completed, Doctoral course

Academic Degrees

  • Doctor of Engineering (博士(工学)), Waseda University, Coursework, 2001.02

 

Research Areas

  • Manufacturing Technology (Mechanical Engineering, Electrical and Electronic Engineering, Chemical Engineering) / Measurement engineering (Measurement Engineering)

  • Informatics / Database (Media Informatics/Data Base)

  • Informatics / Perceptual information processing (Perception Information Processing/Intelligent Robotics)

  • Life Science / Medical systems (Medical Systems)

 

Books

  • 顔の百科事典 (Encyclopedia of the Face)

    丸善出版 (Maruzen Publishing), 2015.09

    Scope: Chapter 7, Computers and the Face: Informatics of the Face

    Summary: Faces are so familiar that hardly a day passes without seeing one, yet how much do we really know about them? "Face studies" (kao-gaku) investigates the face comprehensively. It is treated as a subject not only of zoology, anthropology, anatomy, physiology, dentistry, psychology, and sociology, but also of the arts such as theater and fine art, of informatics in the computing field, and even of cosmetology and physiognomy, spanning a remarkably diverse set of disciplines. This book compiles the historical, cultural, social, and scientific aspects of the face as a mid-length-entry encyclopedia, giving readers easy access to knowledge that cuts across these fields. Published to commemorate the 20th anniversary of the Japanese Academy of Facial Studies, it is the first encyclopedia to systematize face studies.

  • 三次元画像センシングの新展開 (New Developments in 3D Image Sensing)

    AOKI Yoshimitsu, NTS, 2015.05

    Scope: Chapter 5, Section 1: Development of a high-resolution 3D range sensor by fusing color information and range data

  • 電気学会125年史 (125-Year History of the Institute of Electrical Engineers of Japan)

    AOKI Yoshimitsu, 電気学会 (IEEJ), 2013.05

  • マシンビジョン・画像検査のための画像処理入門 (Introduction to Image Processing for Machine Vision and Visual Inspection)

    AOKI Yoshimitsu, 日本工業出版, 2012.10

    Scope: pp. 36-39

Papers

  • Boosting Semantic Segmentation by Conditioning the Backbone with Semantic Boundaries

    Ishikawa H., Aoki Y.

    Sensors (Sensors)  23 ( 15 )  2023.08

    ISSN  14248220


    In this paper, we propose the Semantic-Boundary-Conditioned Backbone (SBCB) framework, an effective approach to enhancing semantic segmentation performance, particularly around mask boundaries, while maintaining compatibility with various segmentation architectures. Our objective is to improve existing models by leveraging semantic boundary information as an auxiliary task. The SBCB framework incorporates a complementary semantic boundary detection (SBD) task with a multi-task learning approach. It enhances the segmentation backbone without introducing additional parameters during inference or relying on independent post-processing modules. The SBD head utilizes multi-scale features from the backbone, learning low-level features in early stages and understanding high-level semantics in later stages. This complements common semantic segmentation architectures, where features from later stages are used for classification. Extensive evaluations using popular segmentation heads and backbones demonstrate the effectiveness of the SBCB. It leads to an average improvement of (Formula presented.) in IoU and a (Formula presented.) gain in the boundary F-score on the Cityscapes dataset. The SBCB framework also improves over- and under-segmentation characteristics. Furthermore, the SBCB adapts well to customized backbones and emerging vision transformer models, consistently achieving superior performance. In summary, the SBCB framework significantly boosts segmentation performance, especially around boundaries, without introducing complexity to the models. Leveraging the SBD task as an auxiliary objective, our approach demonstrates consistent improvements on various benchmarks, confirming its potential for advancing the field of semantic segmentation.

  • Outdoor Vision-and-Language Navigation Needs Object-Level Alignment

    Sun Y., Qiu Y., Aoki Y., Kataoka H.

    Sensors (Sensors)  23 ( 13 )  2023.07

    ISSN  14248220


    In the field of embodied AI, vision-and-language navigation (VLN) is a crucial and challenging multi-modal task. Specifically, outdoor VLN involves an agent navigating within a graph-based environment, while simultaneously interpreting information from real-world urban environments and natural language instructions. Existing outdoor VLN models predict actions using a combination of panorama and instruction features. However, these methods may leave the agent struggling to understand complicated outdoor environments; overlooking details of those environments can then cause navigation to fail. Human navigation often uses specific objects as reference landmarks when navigating to unfamiliar places, providing a more rational and efficient approach to navigation. Inspired by this natural human behavior, we propose an object-level alignment module (OAlM), which guides the agent to focus more on object tokens mentioned in the instructions and recognize these landmarks during navigation. By treating these landmarks as sub-goals, our method effectively decomposes a long-range path into a series of shorter paths, ultimately improving the agent's overall performance. In addition to enabling better object recognition and alignment, our proposed OAlM also fosters a more robust and adaptable agent capable of navigating complex environments. This adaptability is particularly crucial for real-world applications where environmental conditions can be unpredictable and varied. Experimental results show our OAlM is a more object-focused model, and our approach outperforms the baseline on all metrics on the challenging outdoor VLN Touchdown dataset, exceeding it by 3.19% on task completion (TC). These results highlight the potential of leveraging object-level information in the form of sub-goals to improve navigation performance in embodied AI systems, paving the way for more advanced and efficient outdoor navigation.

  • A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework

    Shiba S., Aoki Y., Gallego G.

    Advanced Intelligent Systems (Advanced Intelligent Systems)  5 ( 3 )  2023.03


    Event cameras are emerging vision sensors and their advantages are suitable for various applications such as autonomous robots. Contrast maximization (CMax), which provides state-of-the-art accuracy on motion estimation using events, may suffer from an overfitting problem called event collapse. Prior works are computationally expensive or cannot alleviate the overfitting, which undermines the benefits of the CMax framework. A novel, computationally efficient regularizer based on geometric principles to mitigate event collapse is proposed. The experiments show that the proposed regularizer achieves state-of-the-art accuracy results, while its reduced computational complexity makes it two to four times faster than previous approaches. To the best of our knowledge, this regularizer is the only effective solution for event collapse without trading off the runtime. It is hoped that this work opens the door for future applications that unlock the advantages of event cameras. Project page: https://github.com/tub-rip/event_collapse.
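    The contrast-maximization principle this paper builds on can be illustrated in a few lines: warp each event by a candidate motion, accumulate the warped events into an image, and score the candidate by the image's variance (its contrast). The sketch below is a minimal 1-D toy version under assumed names and data; it is not code from the paper, whose contribution is a regularizer added on top of this objective to prevent degenerate warps from collapsing all events together.

    ```python
    # Minimal sketch of Contrast Maximization (CMax) for event cameras:
    # score a candidate velocity by the variance of the image of warped events.

    def warp_and_score(events, vx, width=8, height=8):
        """Warp events (x, y, t) by velocity vx along x and return the
        variance (contrast) of the resulting image of warped events."""
        img = [[0.0] * width for _ in range(height)]
        for x, y, t in events:
            xw = int(round(x - vx * t))  # warp back to the t = 0 frame
            if 0 <= xw < width and 0 <= y < height:
                img[y][xw] += 1.0
        flat = [v for row in img for v in row]
        mean = sum(flat) / len(flat)
        return sum((v - mean) ** 2 for v in flat) / len(flat)

    def estimate_velocity(events, candidates):
        """Pick the candidate velocity that maximizes contrast."""
        return max(candidates, key=lambda vx: warp_and_score(events, vx))

    # Toy data: a vertical edge moving at 2 px per unit time fires events
    # along its trajectory; CMax should recover vx = 2.
    events = [(1 + 2 * t, y, t) for t in (0.0, 0.5, 1.0) for y in range(8)]
    best = estimate_velocity(events, candidates=[0, 1, 2, 3])
    ```

    The correct velocity aligns all events into one sharp column, maximizing variance; event collapse arises when a richer warp model can squeeze events together for any input, which is what the paper's geometric regularizer penalizes.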

  • Efficient Transformer-Based Compressed Video Modeling via Informative Patch Selection

    Suzuki T., Aoki Y.

    Sensors (Sensors)  23 ( 1 )  2023.01

    ISSN  14248220


    Recently, Transformer-based video recognition models have achieved state-of-the-art results on major video recognition benchmarks. However, their high inference cost significantly limits research speed and practical use. In video compression, methods considering small motions and residuals that are less informative and assigning short code lengths to them (e.g., MPEG4) have successfully reduced the redundancy of videos. Inspired by this idea, we propose Informative Patch Selection (IPS), which efficiently reduces the inference cost by excluding redundant patches from the input of the Transformer-based video model. The redundancy of each patch is calculated from motions and residuals obtained while decoding a compressed video. The proposed method is simple and effective in that it can dynamically reduce the inference cost depending on the input without any policy model or additional loss term. Extensive experiments on action recognition demonstrated that our method could significantly improve the trade-off between the accuracy and inference cost of the Transformer-based video model. Although the method does not require any policy model or additional loss term, its performance approaches that of existing methods that do require them.
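    The patch-selection idea described above can be sketched simply: score each input patch by the motion and residual magnitudes already present in the compressed stream, then keep only the highest-scoring fraction as tokens for the video model. The function names, the additive scoring rule, and the toy data below are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Sketch of informative-patch selection: drop redundant patches whose
    # compressed-domain motion and residual signals are both small.

    def select_informative_patches(motion, residual, keep_ratio=0.5):
        """Rank patches by combined motion + residual magnitude and return
        the indices of the top keep_ratio fraction, in original order."""
        scores = [abs(m) + abs(r) for m, r in zip(motion, residual)]
        k = max(1, int(len(scores) * keep_ratio))
        ranked = sorted(range(len(scores)),
                        key=lambda i: scores[i], reverse=True)
        return sorted(ranked[:k])  # preserve patch order for the model

    # Toy example: 8 patches; patches 2 and 5 carry large motion/residuals,
    # so with keep_ratio=0.25 only those two survive.
    motion   = [0.0, 0.1, 3.0, 0.0, 0.2, 2.5, 0.0, 0.1]
    residual = [0.1, 0.0, 1.0, 0.1, 0.0, 0.8, 0.2, 0.0]
    kept = select_informative_patches(motion, residual, keep_ratio=0.25)
    ```

    Because the scores come from signals decoded anyway (as in MPEG4 streams), the selection itself adds no policy network or extra loss term, which is the efficiency argument the abstract makes.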

  • Non-Deep Active Learning for Deep Neural Networks

    Kawano Y., Nota Y., Mochizuki R., Aoki Y.

    Sensors (Sensors)  22 ( 14 )  2022.07

    ISSN  14248220


    One way to improve annotation efficiency is active learning. The goal of active learning is to select images from many unlabeled images, where labeling will improve the accuracy of the machine learning model the most. To select the most informative unlabeled images, conventional methods use deep neural networks with a large number of computation nodes and long computation time, but we propose a non-deep neural network method that does not require any additional training for unlabeled image selection. The proposed method trains a task model on labeled images, and then the model predicts unlabeled images. Based on this prediction, an uncertainty score is generated for each unlabeled image. Images with a high uncertainty score are considered to have a high information content, and are selected for annotation. Our proposed method is based on a very simple and powerful idea: select samples near the decision boundary of the model. Experimental results on multiple datasets show that the proposed method achieves higher accuracy than conventional active learning methods on multiple tasks and up to 14 times faster execution time from (Formula presented.) s to (Formula presented.) s. The proposed method outperforms the current SoTA method by 1% accuracy on CIFAR-10.
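    The "select samples near the decision boundary" idea admits a very small sketch: after the task model predicts class probabilities for the unlabeled pool, score each sample by its margin (top-1 minus top-2 probability) and annotate the smallest-margin samples first. The margin criterion and all names below are illustrative assumptions; the paper's exact uncertainty score may differ.

    ```python
    # Sketch of margin-based active learning: low margin between the two
    # most likely classes means the sample sits near the decision boundary.

    def margin_uncertainty(probs):
        """Smaller margin = more uncertain prediction."""
        top = sorted(probs, reverse=True)
        return top[0] - top[1]

    def select_for_annotation(unlabeled_probs, budget):
        """Return indices of the `budget` most uncertain unlabeled samples."""
        order = sorted(range(len(unlabeled_probs)),
                       key=lambda i: margin_uncertainty(unlabeled_probs[i]))
        return order[:budget]

    # Toy example: sample 1 sits near the boundary (0.48 vs 0.47),
    # so it is selected before the confident samples.
    preds = [
        [0.95, 0.03, 0.02],   # confident
        [0.48, 0.47, 0.05],   # near the decision boundary
        [0.70, 0.20, 0.10],
    ]
    chosen = select_for_annotation(preds, budget=1)
    ```

    Because this ranking only needs one forward pass of the already-trained task model, it avoids the extra networks and training rounds that make many deep active-learning selectors slow, which matches the speedups the abstract reports.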


Papers, etc., Registered in KOARA

Reviews, Commentaries, etc.

  • 密集領域での動作を理解するためのハイブリッド型映像解析 (Hybrid video analysis for understanding actions in crowded areas)

    大内一成,小林大祐,中州俊信,青木義満

    東芝レビュー (Toshiba Review, Toshiba)  72 ( 4 ) 30 - 34 2017.09

    Internal/External technical report, pre-print, etc., Joint Work

  • 画像センシング技術によるチームスポーツ映像からのプレー解析 (Play analysis from team-sports video using image-sensing technology)

    林 昌希,青木 義満

    映像情報メディア学会誌 (Journal of the Institute of Image Information and Television Engineers)  70 ( 5 ) 710 - 714 2016.09

    Article, review, commentary, editorial, etc. (scientific journal), Joint Work

  • Image Sensing Technologies and its Applications for Human Action Recognition

    AOKI Yoshimitsu

    Journal of JSNDI (The Japanese Society for Non-Destructive Inspection)  65 ( 6 ) 254 - 260 2016.06

    Article, review, commentary, editorial, etc. (scientific journal), Single Work

  • パターン計測技術の深化と広がる産業応用 -総論- (Advances in pattern measurement technology and its expanding industrial applications: an overview)

    AOKI Yoshimitsu

    計測と制御 (Measurement and Control, SICE)  53 ( 7 ) 555 - 556 2014.07

    Article, review, commentary, editorial, etc. (scientific journal), Single Work

Presentations

  • 自由な表現と被写体の質感を維持するメイク生成モデルの開発 (Development of a makeup-generation model that preserves free expression and subject texture)

    帯金駿, 田川晴菜, 中川雄介, 中村理恵, 青木義満

    27th Annual Conference of the Japanese Academy of Facial Studies (フォーラム顔学2022)

    2022.09

    Oral presentation (general)

  • 不確実性を考慮したセマンティックマップの生成 (Semantic map generation considering uncertainty)

    竹中悠,森巧磨,谷口恭弘,青木義満

    27th Intelligent Mechatronics Workshop

    2022.09

    Oral presentation (general)

  • 重要パッチ選択に基づく効率的動画認識 (Efficient video recognition based on informative patch selection)

    鈴木 智之, 青木 義満

    25th Meeting on Image Recognition and Understanding (MIRU2022)

    2022.07

    Poster presentation

  • 音響信号を用いた人物の3次元姿勢推定 (3D human pose estimation using acoustic signals)

    川島穣, 柴田優斗, 五十川麻理子, 入江豪, 木村昭悟, 青木義満

    25th Meeting on Image Recognition and Understanding (MIRU2022)

    2022.07

    Oral presentation (general)

  • 完全合成画像での学習による文書画像の影除去 (Shadow removal from document images by training on fully synthetic images)

    松尾祐飛,青木義満

    28th Symposium on Sensing via Image Information (SSII2022)

    2022.06

    Poster presentation


Intellectual Property Rights, etc.

  • Image processing apparatus, image processing program, and image processing method

    Date applied: 2019-105297  2019.06 

    Joint

  • Risk estimation apparatus, risk estimation method, and computer program for risk estimation

    Date applied: Japanese Patent Application No. 2015-005241  2015.01 

    Date issued: Japanese Patent No. 6418574  2018.10

    Patent, Joint

Awards

  • HCG Symposium 2018 Special Theme Session Award

    秋月秀一 (Keio Univ.), 大木美加, バティスト・ブロー, 鈴木健嗣 (Univ. of Tsukuba), 青木義満 (Keio Univ.), 2018.12, IEICE Human Communication Group, for "Person-tracking technology that adapts to dynamic environmental changes caused by floor projection"

    Type of Award: Award from Japanese society, conference, symposium, etc.

  • HCG Symposium 2018 Best Interactive Presentation Award

    秋月秀一 (Keio Univ.), 大木美加, バティスト・ブロー, 鈴木健嗣 (Univ. of Tsukuba), 青木義満 (Keio Univ.), 2018.12, IEICE Human Communication Group, for "Person-tracking technology that adapts to dynamic environmental changes caused by floor projection"

    Type of Award: Award from Japanese society, conference, symposium, etc.

  • JSPE Numata Memorial Paper Award

    加藤直樹, 箱崎浩平, 里雄二, 古山純子, 田靡雅基, 青木義満, 2018.03, Japan Society for Precision Engineering, for "Video-based person re-identification using metric learning with convolutional neural networks"

    Type of Award: Award from Japanese society, conference, symposium, etc.

  • IWAIT2018 Best Paper Award

    Ryunosuke Kurose, Masaki Hayashi, Yoshimitsu Aoki, 2018.01, IWAIT2018

    Type of Award: International academic award (Japan or overseas)

  • IES-KCIC2017 Best Paper Award

    Siti Nor Khuzaimah Amit, Yoshimitsu Aoki, 2017.09, IEEE Indonesia Section, Disaster Detection from Aerial Imagery with Convolutional Neural Network

    Type of Award: International academic award (Japan or overseas)


 

Courses Taught

  • SEMINAR IN ELECTRONICS AND INFORMATION ENGINEERING(2)

    2024

  • RECITATION IN ELECTRONICS AND INFORMATION ENGINEERING

    2024

  • LABORATORIES IN ELECTRONICS AND INFORMATION ENGINEERING(2)

    2024

  • INDEPENDENT STUDY ON INTEGRATED DESIGN ENGINEERING

    2024

  • IMAGING SCIENCE AND TECHNOLOGY

    2024


 

Social Activities

  • Association for the Promotion of Image Information Education (CG-ARTS)

    2013.07
    -
    2015.03
  • National Traffic Safety and Environment Laboratory

    2009.12
    -
    2012.03

Memberships in Academic Societies

  • International Symposium on Optomechatronic Technologies 2013, 

    2013.04
    -
    2013.11
  • International Workshop on Advanced Image Technology 2013(IWAIT2013), 

    2013.01
    -
    2013.09
  • 11th International Conference on Quality Control by Artificial Vision(QCAV2013), 

    2012.12
    -
    2013.05
  • 3rd International Conference on 3D Body Scanning Technologies, 

    2012.06
    -
    2012.10
  • SICE Technical Committee on Pattern Measurement

    2012.04
    -
    Present


Committee Experiences

  • 2017.04
    -
    Present

    NEDO Technical Committee Member, NEDO

  • 2016.07
    -
    2016.11

    Optics & Photonics Japan 2016 Steering Committee Member, Optical Society of Japan

  • 2016.07
    -
    2016.12

    Program committee member, International Workshop on Human Tracking and Behavior Analysis 2016

  • 2015.09
    -
    2016.08

    22nd Symposium on Sensing via Image Information (SSII2016), Executive Committee Chair, Image Sensing Technology Study Group

  • 2014.09
    -
    2015.08

    21st Symposium on Sensing via Image Information (SSII2015), Executive Committee Chair, Image Sensing Technology Study Group
