Takahashi, Masaki


Affiliation

Faculty of Science and Technology, Department of System Design Engineering (Yagami)

Position

Professor


Career

  • 2004.04
    -
    2005.03

Special Research Assistant, Keio University

  • 2005.04
    -
    2007.03

Research Associate, Department of System Design Engineering, Faculty of Science and Technology, Keio University

  • 2007.04
    -
    2009.03

Assistant Professor, Department of System Design Engineering, Faculty of Science and Technology, Keio University

  • 2009.04
    -
    2011.03

Senior Assistant Professor, Department of System Design Engineering, Faculty of Science and Technology, Keio University

  • 2010.04
    -
    2011.03

Part-time Lecturer, Tokyo Metropolitan University


Academic Degrees

Ph.D. (Engineering), Keio University, 2004.03

Integrated Neural-Network-Based Intelligent Control of Nonlinear Systems

 

Research Areas

  • Informatics / Mechanics and mechatronics

 

Books

  • Driving Control of a Powered Wheelchair by Voluntary Eye Blinking and with Environment Recognition

    Kyohei Okugawa, Masaki Nakanishi, Yasue Mitsukura, TAKAHASHI MASAKI, Trans Tech Publications Inc., 2014

    Scope: 1764-1768

  • Gait Measurement System for the Elderly Using Laser Range Sensor

    Ayanori Yorozu, TAKAHASHI MASAKI, Trans Tech Publications Inc., 2014

    Scope: 1629-1635

  • 3D X-Y-T Space Path Planning for Autonomous Mobile Robot Considering Dynamic Constraints

    Ippei Nishitani, Tetsuya Matsumura, Mayumi Ozawa, Ayanori Yorozu, TAKAHASHI MASAKI, Trans Tech Publications Inc., 2014

    Scope: 1163-1167

  • INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, Lecture Notes in Electrical Engineering

    TAKAHASHI MASAKI, Takafumi Suzuki, Tetsuya Matsumura and Ayanori Yorozu, Springer, 2012

    Scope: 51-64


Papers

  • Empirical study of future image prediction for image-based mobile robot navigation

    Ishihara Y., Takahashi M.

    Robotics and Autonomous Systems, 150, 2022.04

    ISSN  09218890


    Recent image-based robotic systems use predicted future state images to control robots. Therefore, the prediction accuracy of the future state image affects the performance of the robot. To predict images, most previous studies assume that the camera captures the entire scene and that the environment is static. However, in real robot applications, these assumptions do not always hold. For example, if a camera is attached to a mobile robot, its view changes from time to time. In this study, we analyzed the relationship between the performance of the image prediction model and the robot's behavior, controlled by an image-based navigation algorithm. Through mobile robot navigation experiments using front-faced and omni-directional cameras, we discussed the capabilities of the image prediction models and demonstrated their performance when applied to the image-based navigation algorithm. Moreover, to adapt to the dynamic changes in the environment, we studied the effectiveness of directing the camera to the ceiling. We showed that robust navigation can be achieved without using images from cameras directed toward the front or the floor, because these views can be disturbed by moving objects in a dynamic environment.

  • Human Leg Tracking by Fusion of Laser Range and Insole Force Sensing With Gaussian Mixture Model-Based Occlusion Compensation

    Eguchi R., Takahashi M.

    IEEE Sensors Journal, 22(4), 3704-3714, 2022.02

    ISSN  1530437X


    Simultaneous measurement of spatial, temporal, and kinetic gait parameters has become important for monitoring patients with neurological disorders in a hospital. As spatial sensing technologies, laser range sensors (LRSs), which obtain highly accurate two-dimensional distance data over a wide range, have been employed to detect/track both legs during walking. However, when walking on curved trajectories, a continuous occlusion occurs over several sampling steps and produces false tracking. To address this issue in combination with temporal and kinetic sensing, this paper presents a fusion system of LRSs and force-sensing insoles and an occlusion compensation based on probabilistic motion models. First, the system pre-tracks a target person during walking along a straight line and curved paths under different curvatures/directions in a situation without occlusion. Gait cycles of each walking type are divided by foot-grounding obtained from the insoles. Relationships between leg trajectories and traveling directions during the gait cycle are then learned using user-specific Gaussian mixture models (GMMs). When an occlusion occurs in post-tracking, a maximum likelihood (ML) GMM is identified using a joint probability of both legs' trajectories, in accordance with biomechanics that both legs move in a coordinated manner. The ML GMM then compensates for the traveling direction and position of the occluded leg, and the system interpolates/re-tracks its positions to correct the states during occlusion. Experimental results demonstrated that the proposed method (fusion with the insoles, occlusion compensation, and interpolation/re-tracking) significantly enhanced tracking performance during occlusion and estimated leg positions with valid accuracy (errors of under 60 mm).
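    The maximum-likelihood model selection described in this abstract can be illustrated with a small sketch. This is not the authors' implementation: the walking types, mixture parameters, and trajectories below are hypothetical, and a diagonal-covariance mixture evaluated with plain NumPy stands in for the user-specific GMMs learned in the paper.

    ```python
    import numpy as np

    def gmm_loglik(points, weights, means, variances):
        """Total log-likelihood of 2-D leg positions under a
        diagonal-covariance Gaussian mixture model."""
        points = np.atleast_2d(points)
        d = points.shape[1]
        density = np.zeros(len(points))
        for w, mu, var in zip(weights, means, variances):
            # weighted Gaussian density of each component
            norm = (2.0 * np.pi) ** (-d / 2.0) / np.sqrt(np.prod(var))
            expo = np.exp(-0.5 * np.sum((points - mu) ** 2 / var, axis=1))
            density += w * norm * expo
        return float(np.sum(np.log(density + 1e-300)))

    def select_ml_model(models, left_traj, right_traj):
        """Pick the walking-type GMM that maximizes the joint
        log-likelihood of BOTH legs' trajectories, following the
        biomechanical prior that the legs move in a coordinated manner."""
        scores = {name: gmm_loglik(left_traj, *m) + gmm_loglik(right_traj, *m)
                  for name, m in models.items()}
        return max(scores, key=scores.get)

    # Hypothetical one-component models for two walking types.
    models = {
        "straight":   ([1.0], [np.array([0.0, 0.0])], [np.array([0.2, 0.2])]),
        "right_turn": ([1.0], [np.array([2.0, 2.0])], [np.array([0.2, 0.2])]),
    }
    left  = np.array([[0.1, 0.0], [0.2, 0.1]])
    right = np.array([[-0.1, 0.1], [0.0, 0.2]])
    print(select_ml_model(models, left, right))  # -> straight
    ```

    The joint score simply sums the per-leg log-likelihoods, mirroring the paper's assumption that both legs follow the same walking type; the selected model would then supply the motion prior used to compensate the occluded leg's position.
    
    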

  • Dynamic Object Removal from Unpaired Images for Agricultural Autonomous Robots

    Akada H., Takahashi M.

    Lecture Notes in Networks and Systems, 412 LNNS, 641-653, 2022

    ISSN  23673370


    Recently, the demand for agricultural autonomous robots has been increasing. Using vision-based robotic environmental recognition, they can generally follow farmers to support work activities such as conveyance of the harvest. However, a major issue arises in that dynamic objects (including humans) often enter the images that the robots rely on for environmental recognition tasks. These dynamic objects degrade the performance of image recognition considerably, resulting in collisions with crops or ridges when the robots are following the worker. To address the occlusion issue, generative adversarial network (GAN) solutions can be adopted, as they feature a generative capability to reconstruct the area behind dynamic objects. However, preceding GAN methods generally presuppose paired image datasets to train their networks, which are difficult to prepare. Therefore, a method based on unpaired image datasets is desirable in real-world environments such as a farm. For this purpose, we propose a new approach that integrates the state-of-the-art neural network architectures CycleGAN and Mask R-CNN. Our system is trained with a human-tracking dataset collected by an agricultural autonomous robot on a farm. We evaluate the performance of our system both qualitatively and quantitatively for the task of human removal in images.

  • Future Image Prediction for Mobile Robot Navigation: Front-Facing Camera Versus Omni-Directional Camera

    Ishihara Y., Takahashi M.

    Lecture Notes in Networks and Systems, 412 LNNS, 654-669, 2022

    ISSN  23673370


    When we perform a task, we select the action by imagining its future consequences. Hence, the ability to predict future states would also be an essential feature for robotic agents because it would allow them to plan effective actions to accomplish given tasks. In this research, we explore an action-conditioned future image prediction model considering its application to navigation tasks for a mobile robot. We investigate the image prediction performance of deep neural network architectures and training strategies with two different camera systems. One camera system is a conventional front-facing camera that has a narrow field of view with high definition, and the other is an omni-directional camera that has a wide field of view with low definition. We compare the performances of prediction models for these two camera systems, and propose using an image prediction model with the omni-directional camera for the navigation tasks of a robot. We evaluate the prediction performance of each camera system through experiments conducted in a complex living room-like environment. We demonstrate that models with an omni-directional camera system outperform models with a conventional front-facing camera. In particular, the model comprising a combination of action-conditioned long short-term memory successfully predicts future images for states of more than 100 steps ahead in both simulation and real-world scenarios. Further, by integrating the proposed system into an image-prediction-based navigation algorithm, we demonstrate that a model with an omni-directional camera can successfully navigate the robot in cases where one with a conventional front-facing camera fails.

  • Harmonious Robot Navigation Strategies for Pedestrians

    Nakaoka S., Yorozu A., Takahashi M.

    Lecture Notes in Networks and Systems, 412 LNNS, 98-108, 2022

    ISSN  23673370


    Recently, the demand for service robots that move autonomously in a crowded environment, such as stations, airports, and commercial facilities has increased. Such robots are required to move to a destination without hindering the progress of the surrounding pedestrians. Previous papers proposed collision avoidance methods based on the direction and distance of the destination of the robot, and the position and velocity of pedestrians in the vicinity. However, in a crowded environment, the behavior of a robot may affect other pedestrians, or the behavior of a pedestrian facing the robot may affect other pedestrians. Moreover, considering the motion strategy used by a pedestrian in a crowded environment, the impact on other pedestrians can be reduced by the robot following the pedestrians moving in the direction in which the robot wants to move. This navigation method has not been proposed thus far. Therefore, in this paper, we propose a method that considers the impact of the robot on surrounding pedestrians and the impact of those pedestrians on other pedestrians. The robot determines the avoidance or following action based on the traveling direction of the surrounding pedestrians.


Papers, etc., Registered in KOARA


Reviews, Commentaries, etc.

Presentations

  • Smart Agriculture Using Multifunctional Robots

    Masaki Takahashi, Genya Ishigami, Ayanori Yorozu

    The 5th Multi-Symposium on Control Systems (Tokyo)

    2018.03

    Oral presentation (general)

  • Agile Attitude Maneuver via SDRE Controller using SGCMG Integrated Satellite Model

    Ryotaro Ozawa, TAKAHASHI MASAKI

    The 2018 AIAA Science and Technology Forum and Exposition (AIAA SciTech 2018) (Gaylord Palms, Kissimmee, Florida, USA)

    2018.01

    Oral presentation (general)

  • Control System Design of a Quadrotor Suppressing the Virtual Reality Sickness

    Kandai Watanabe, TAKAHASHI MASAKI

    The 2018 AIAA Science and Technology Forum and Exposition (AIAA SciTech 2018) (Gaylord Palms, Kissimmee, Florida, USA)

    2018.01

    Oral presentation (general)

  • Minimum-time Attitude Maneuver of Small Satellite Mounted with Communication Antenna

    Kota Mori, TAKAHASHI MASAKI

    The 2018 AIAA Science and Technology Forum and Exposition (AIAA SciTech 2018) (Gaylord Palms, Kissimmee, Florida, USA)

    2018.01

    Oral presentation (general)

  • Attitude Maneuver and Gimbal Angle Guidance by SDRE Controller using SGCMG Integrated Satellite SDLR Model

    Ryotaro Ozawa, TAKAHASHI MASAKI

    2017 Asian Control Conference (ASCC 2017) (Gold Coast, Australia)

    2017.12

    Oral presentation (general)


Research Projects of Competitive Funds, etc.

  • Multimodal Analysis of Human Work for the Introduction of Autonomous Robots

    2023.04
    -
    2026.03

    Grant-in-Aid for Scientific Research (C), Principal investigator

  • Development of a Measurement and Evaluation System for Functional Mobility and Cognitive Function in Patients with Parkinson's Disease

    2016.04
    -
    2020.03

    MEXT/JSPS Grant-in-Aid for Scientific Research (B), Principal investigator

  • Fall-Prevention Measurement System

    2011.04
    -
    2012.03

    Murata Machinery, Ltd., Joint research, Principal investigator

  • Elucidation of Walking Control Mechanisms under Variable Gravity and System Design of a Next-Generation Spacesuit

    2009.04
    -
    2010.03

    Keio Leading-edge Laboratory of Science and Technology (KLL), Designated Research Project for Exploration of Next-Generation Frontier Fields (single-year), Research grant

  • Nonlinear Dynamics Design and Control of a Biped Robot Incorporating Passive Elements

    2008.04
    -
    2009.03

    Keio University, Keio Gijuku Academic Development Funds, Research grant


Awards

  • Division Contribution Award

    TAKAHASHI MASAKI, 2015.08, The Japan Society of Mechanical Engineers, Dynamics, Measurement and Control Division, Division Contribution Award

    Type of Award: Award from Japanese society, conference, symposium, etc.


    In recognition of many years of activity in the Dynamics, Measurement and Control Division, including major contributions to revitalizing the division as its secretary in FY2013 (the 91st term), and of his work as secretary of the division's Dynamics and Design Conference 2014, whose organization and management he led to a successful conclusion.

  • The 12th International Conference on Motion and Vibration Control (MOVIC 2014), Best Student Presentation Award

    Takayuki Ishida, Hiroka Inoue, Wataru Mogi, TAKAHASHI MASAKI, Masahiro Ono, Shuichi Adachi, 2014.08, Long-Range Navigation for Resource-Constrained Planetary Rovers Using Angle of Arrival

    Type of Award: Other

  • 2013 International Conference on Control, Mechatronics and Automation (ICCMA 2013), Excellent Presentation Award

    Ippei Nishitani, Tetsuya Matsumura, Mayumi Ozawa, Ayanori Yorozu, TAKAHASHI MASAKI, 2013.12, 3D X-Y-T Space Path Planning for Autonomous Mobile Robot Considering Dynamic Constraints

    Type of Award: Other

  • International Conference on Intelligent Automation and Robotics (ICIAR'12), Best Student Paper Award

    YOKOYAMA KAZUTO, TAKAHASHI MASAKI, 2012.10, Energy Shaping Nonlinear Acceleration Control for a Mobile Inverted Pendulum with a Slider Mechanism Utilizing Instability

    Type of Award: Other

  • The 12th International Conference on Intelligent Autonomous Systems, Best Poster Paper Award

    AYANORI YOROZU, TAKAFUMI SUZUKI, TETSUYA MATSUMURA, TAKAHASHI MASAKI, 2012.06, Simultaneous Control of Translational and Rotational Motion for Autonomous Omnidirectional Mobile Robot Considering Shape of the Robot and Movable Area by Height

    Type of Award: International academic award (Japan or overseas)


    The research results on environment evaluation and action decision for autonomous mobile robots, presented at the international conference on intelligent autonomous systems, received the Best Poster Paper Award.


 

Courses Taught

  • BASIC EXERCISE ON SYSTEM DESIGN ENGINEERING

    2023

  • BACHELOR'S THESIS

    2023

  • COMPUTER PROGRAMMING EXERCISE

    2023

  • SYSTEMS ENGINEERING

    2023

  • SPECIAL LECTURE ON SPACE AND ENVIRONMENT DESIGN ENGINEERING 1

    2023


 

Social Activities

  • Japan Patent Office

    2010.02
    -
    2015.11
  • Kawasaki-Kanagawa Robot Business Council

    2009.09
    -
    2010.03


    Chief examiner, Living-Field Robot Working Group, FY2009 Commercialization Support Council

Memberships in Academic Societies

  • The Institute of Electrical Engineers of Japan (IEEJ)

    2015.04
    -
    2016.03
  • Architectural Institute of Japan

    2010.04
    -
    2016.03
  • The Japan Society for Aeronautical and Space Sciences

    2010.04
    -
    2016.03
  • The Society of Instrument and Control Engineers (SICE)

    2009.04
    -
    2016.03
  • AIAA: The American Institute of Aeronautics and Astronautics

    2008.04
    -
    2016.03


Committee Experiences

  • 2012.04
    -
    2013.03

    Committee Member, The Japan Society of Mechanical Engineers


    Secretary, Steering Committee, Dynamics, Measurement and Control Division

  • 2011.04
    -
    2012.03

    Committee Member, The Japan Society of Mechanical Engineers


    Steering Committee Member, Dynamics, Measurement and Control Division

  • 2010.04
    -
    2012.03

    Committee Member, The Japan Society of Mechanical Engineers


    Steering Committee Member, Space Engineering Division

  • 2010.02
    -
    2012.11

    Examination Committee Member, Industrial Property Council (Patent Attorney Examination), Japan Patent Office

  • 2009.09
    -
    2010.03

    Program Chair, Kawasaki-Kanagawa Robot Business Council


    Chief examiner, Living-Field Robot Working Group, FY2009 Commercialization Support Council
