Yukawa, Masahiro



Faculty of Science and Technology, Department of Electronics and Electrical Engineering (Yagami)




Profile

  • I aim to build a new signal processing paradigm founded on mathematical principles. The goal is to solve signal processing problems in a transparent way by drawing on knowledge accumulated in the mathematical sciences. My main research concerns adaptive signal processing algorithms based on fixed point approximation and convex analysis. I hope this work will lead to signal processing technologies that open up a new era.

Career

  • 2005.04

    Tokyo Institute of Technology, JSPS Research Fellow (DC2)

  • 2006.10

    Tokyo Institute of Technology (visiting the University of York, UK), JSPS Research Fellow (PD)

  • 2007.04

    RIKEN, Special Postdoctoral Researcher

  • 2010.04

    Niigata University, Associate Professor

Academic Background

  • 1998.04

    Tokyo Institute of Technology, Faculty of Engineering, Department of Electrical and Electronic Engineering

    University, Graduated

  • 2002.04

    Tokyo Institute of Technology, Graduate School of Science and Engineering, Department of Integrated Systems

    Graduate School, Completed, Master's course

  • 2004.04

    Tokyo Institute of Technology, Graduate School of Science and Engineering, Department of Integrated Systems

    Graduate School, Completed, Doctoral course

Academic Degrees

  • Doctor of Engineering, Tokyo Institute of Technology, Coursework, 2006.09

    A study of efficient adaptive filtering algorithms and their applications to acoustic and communication systems


Papers

  • Learning Sparse Graph with Minimax Concave Penalty under Gaussian Markov Random Fields

    Koyakumaru T., Yukawa M., Pavez E., Ortega A.

    IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E106A (1), 23-34, 2023.01

    ISSN  09168508


    This paper presents a convex-analytic framework to learn sparse graphs from data. While our problem formulation is inspired by an extension of the graphical lasso using the so-called combinatorial graph Laplacian framework, a key difference is the use of a nonconvex alternative to the ℓ1 norm to attain graphs with better interpretability. Specifically, we use the weakly-convex minimax concave penalty (the difference between the ℓ1 norm and the Huber function) which is known to yield sparse solutions with lower estimation bias than ℓ1 for regression problems. In our framework, the graph Laplacian is replaced in the optimization by a linear transform of the vector corresponding to its upper triangular part. Via a reformulation relying on Moreau's decomposition, we show that overall convexity is guaranteed by introducing a quadratic function to our cost function. The problem can be solved efficiently by the primal-dual splitting method, of which the admissible conditions for provable convergence are presented. Numerical examples show that the proposed method significantly outperforms the existing graph learning methods with reasonable computation time.
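    As background for readers unfamiliar with the penalty above: the abstract defines the minimax concave penalty as the difference between the ℓ1 norm and the Huber function. The following is a minimal illustrative sketch of that definition (not the authors' code; the parameter `gamma` is an assumed name controlling where the penalty saturates):

    ```python
    import numpy as np

    def huber(x, gamma=1.0):
        """Huber function: quadratic near zero, linear in the tails."""
        a = np.abs(x)
        return np.where(a <= gamma, x**2 / (2 * gamma), a - gamma / 2)

    def minimax_concave(x, gamma=1.0):
        """MC penalty: the l1 norm minus the Huber function (elementwise).
        It grows like |x| near zero but saturates at gamma/2, which is
        why it induces less estimation bias than the l1 norm."""
        return np.abs(x) - huber(x, gamma)

    x = np.linspace(-3.0, 3.0, 7)
    print(minimax_concave(x, gamma=1.0))  # saturates at 0.5 away from zero
    ```

    Unlike the ℓ1 norm, the penalty stops growing once |x| exceeds `gamma`, so large coefficients are not shrunk further.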

  • Linearly-Involved Moreau-Enhanced-Over-Subspace Model: Debiased Sparse Modeling and Stable Outlier-Robust Regression

    Yukawa M., Kaneko H., Suzuki K., Yamada I.

    IEEE Transactions on Signal Processing, 71, 1232-1247, 2023

    ISSN  1053587X


    We present an efficient mathematical framework to derive promising methods that enjoy 'enhanced' desirable properties. The popular minimax concave penalty for sparse modeling subtracts, from the ℓ1 norm, its Moreau envelope, inducing nearly unbiased estimates and thus yielding considerable performance enhancements. To extend it to underdetermined linear systems, we propose the projective minimax concave penalty, which leads to 'enhanced' sparseness over the input subspace. We also present a promising regression method which has an 'enhanced' robustness and substantial stability by distinguishing outlier and noise explicitly. The proposed framework, named the linearly-involved Moreau-enhanced-over-subspace (LiMES) model, encompasses those two specific examples as well as two others: stable principal component pursuit and robust classification. The LiMES function involved in the model is an 'additively nonseparable' weakly convex function, while the 'inner' objective function to define the Moreau envelope is 'separable'. This mixed nature of separability and nonseparability allows an application of the LiMES model to the underdetermined case with an efficient algorithmic implementation. Two linear/affine operators play key roles in the model: one corresponds to the projection mentioned above and the other takes care of robust regression/classification. A necessary and sufficient condition for convexity of the smooth part of the objective function is studied. Numerical examples show the efficacy of LiMES in applications to sparse modeling and robust regression.
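    The Moreau envelope mentioned above is the infimal regularization env_f(x) = min_v [ f(v) + (x - v)^2 / (2*gamma) ]. As an illustrative sketch (not the authors' implementation), a brute-force grid search confirms the classical fact the abstract relies on: the Moreau envelope of the ℓ1 norm is the Huber function. The grid bounds and spacing below are arbitrary assumptions:

    ```python
    import numpy as np

    def moreau_envelope(f, x, gamma=1.0):
        """Moreau envelope of f at a scalar x, approximated by a
        brute-force minimization over a fine grid of candidate v."""
        v = np.linspace(-10.0, 10.0, 200001)
        return np.min(f(v) + (x - v) ** 2 / (2.0 * gamma))

    def huber(x, gamma=1.0):
        """Known closed form of the Moreau envelope of |.|."""
        return x**2 / (2.0 * gamma) if abs(x) <= gamma else abs(x) - gamma / 2.0

    for x in (0.3, 2.0, -1.5):
        # The grid approximation matches the closed form to high accuracy.
        print(x, moreau_envelope(np.abs, x), huber(x))
    ```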

  • Sparse Stable Outlier-Robust Signal Recovery Under Gaussian Noise

    Suzuki K., Yukawa M.

    IEEE Transactions on Signal Processing, 71, 372-387, 2023

    ISSN  1053587X


    This paper presents a novel framework for sparse robust signal recovery integrating the sparse recovery using the minimax concave (MC) penalty and robust regression called sparse outlier-robust regression (SORR) using the MC loss. While the proposed approach is highly robust against huge outliers, the sparseness of estimates can be controlled by taking into consideration a tradeoff between sparseness and robustness. To accommodate the prior information about additive Gaussian noise and outliers, an auxiliary vector to model the noise is introduced. The remarkable robustness and stability come from the use of the MC loss and the squared ℓ2 penalty of the noise vector, respectively. In addition, the simultaneous use of the MC and squared ℓ2 penalties of the coefficient vector leads to a certain remarkable grouping effect. The necessary and sufficient conditions for convexity of the smooth part of the cost are derived under a certain nonempty-interior assumption via the product space formulation using the linearly-involved Moreau-enhanced-over-subspace (LiMES) framework. The efficacy of the proposed method is demonstrated by simulations in its application to speech denoising under highly noisy environments as well as to toy problems.
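    To make the robustness claim concrete: an MC-type loss assigns a bounded cost to arbitrarily large residuals, whereas the squared loss grows without bound. The sketch below is ours, not the paper's code, and uses the same "ℓ1 minus Huber" construction described in the abstracts above:

    ```python
    import numpy as np

    def mc_loss(r, gamma=1.0):
        """Minimax concave loss on residuals r: equals |r| - r^2/(2*gamma)
        for small residuals and saturates at gamma/2 for large ones, so a
        single huge outlier contributes only a bounded amount to the cost."""
        a = np.abs(r)
        return np.where(a <= gamma, a - r**2 / (2 * gamma), gamma / 2)

    residuals = np.array([0.1, -0.2, 100.0])  # last entry is an outlier
    print(mc_loss(residuals))        # outlier costs only gamma/2 = 0.5
    print(0.5 * residuals**2)        # squared loss explodes on the outlier
    ```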

  • Relaxed zero-forcing beamformer under temporally-correlated interference

    Kono T., Yukawa M., Piotrowski T.

    Signal Processing, 190, 2022.01

    ISSN  01651684


    The relaxed zero-forcing (RZF) beamformer is a quadratically-and-linearly constrained minimum variance beamformer. The central question addressed in this paper is whether RZF performs better than the widely-used minimum variance distortionless response and zero-forcing beamformers under temporally-correlated interference. First, RZF is rederived by imposing an ellipsoidal constraint that bounds the amount of interference leakage for mitigating the intrinsic gap between the output variance and the mean squared error (MSE) which stems from the temporal correlations. Second, an analysis of RZF is presented for the single-interference case, showing how the MSE is affected by the spatio-temporal correlations between the desired and interfering sources as well as by the signal and noise powers. Third, numerical studies are presented for the multiple-interference case, showing the remarkable advantages of RZF in its basic performance as well as in its application to brain activity reconstruction from EEG data. The analytical and experimental results clarify that the RZF beamformer gives near-optimal performance in some situations.
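    For context, the MVDR beamformer that RZF is compared against minimizes the output variance w^H R w subject to the distortionless constraint w^H a = 1, with the closed-form solution w = R^{-1}a / (a^H R^{-1} a). A minimal numpy sketch of this baseline follows; the steering vectors and covariance are toy assumptions, and the RZF beamformer itself is not implemented here:

    ```python
    import numpy as np

    def mvdr_weights(R, a):
        """MVDR beamformer: minimize w^H R w subject to w^H a = 1."""
        Rinv_a = np.linalg.solve(R, a)
        return Rinv_a / (a.conj() @ Rinv_a)

    # Toy scenario: 4-element array, one strong interferer plus white noise.
    n = 4
    a = np.exp(1j * np.pi * np.arange(n) * np.sin(0.3))  # desired steering vector
    b = np.exp(1j * np.pi * np.arange(n) * np.sin(1.0))  # interferer steering vector
    R = 10 * np.outer(b, b.conj()) + np.eye(n)           # interference + noise covariance
    w = mvdr_weights(R, a)
    print(abs(w.conj() @ a))   # unit gain toward the desired source
    print(abs(w.conj() @ b))   # strongly attenuated interferer
    ```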

  • Distributed Sparse Optimization With Weakly Convex Regularizer: Consensus Promoting and Approximate Moreau Enhanced Penalties Towards Global Optimality

    Komuro K., Yukawa M., Cavalcante R.L.G.

    IEEE Transactions on Signal and Information Processing over Networks, 8, 514-527, 2022


    We propose a promising framework for distributed sparse optimization based on weakly convex regularizers. More specifically, we pose two distributed optimization problems to recover sparse signals in networks. The first problem formulation relies on statistical properties of the signals, and it uses an approximate Moreau enhanced penalty. In contrast, the second formulation does not rely on any statistical assumptions, and it uses an additional consensus promoting penalty (CPP) that convexifies the cost function over the whole network. To solve both problems, we propose a distributed proximal debiasing-gradient (DPD) method, which uses the exact first-order proximal gradient algorithm. The DPD method features a pair of proximity operators that play complementary roles: one sparsifies the estimate, and the other reduces the bias caused by the sparsification. Owing to the overall convexity of the whole cost functions, the proposed method guarantees convergence to a global minimizer, as demonstrated by numerical examples. In addition, the use of CPP improves the convergence speed significantly.
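    The proximal machinery mentioned above can be illustrated with the standard centralized proximal gradient method (ISTA), whose ℓ1 proximity operator is soft thresholding, the canonical sparsifying step. This sketch is a simplified stand-in, not the proposed distributed DPD method, and all problem sizes and parameters below are arbitrary assumptions:

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Proximity operator of tau * ||.||_1: the sparsifying step."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    def ista(A, y, lam, step, n_iter=2000):
        """Proximal gradient (ISTA) for min 0.5*||Ax - y||^2 + lam*||x||_1."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                  # gradient of the smooth part
            x = soft_threshold(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))                # underdetermined system
    x_true = np.zeros(100)
    x_true[[3, 30, 77]] = [2.0, -1.5, 1.0]            # 3-sparse ground truth
    y = A @ x_true
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from spectral norm
    x_hat = ista(A, y, lam=0.1, step=step)
    print(np.count_nonzero(np.abs(x_hat) > 1e-2))     # number of surviving coefficients
    ```

    The soft-thresholding step shrinks all coefficients toward zero, which is exactly the bias that the debiasing operator in the abstract is designed to counteract.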


Papers, etc., Registered in KOARA

Reviews, Commentaries, etc.

  • A New Stream of Nonlinear Adaptive Signal Processing Technique : An Application of Reproducing Kernel


    Journal of the Institute of Electronics, Information and Communication Engineers (IEICE), 97 (10), 876-882, 2014.10

    Article, review, commentary, editorial, etc. (scientific journal), Single Work

  • A Variable-Metric NLMS Algorithm for Sparse Systems and Colored Inputs

    Toda Osamu, Yukawa Masahiro, Sasaki Shigenobu

    Proceedings of the Signal Processing Symposium (IEICE Signal Processing Technical Committee), 27, 527-530, 2012.11

    ISSN  1881-4654

Presentations

  • Online Learning in L2 Space with Multiple Gaussian Kernels

    Motoya Ohnishi

    European Signal Processing Conference


  • Distributed Nonlinear Regression Using In-Network Processing With Multiple Gaussian Kernels

    Ban-Sok Shin

    IEEE International Workshop on Signal Processing Advances in Wireless Communications


  • Automatic shrinkage tuning based on a system-mismatch estimate for sparsity-aware adaptive filtering

    Masao Yamagishi, Masahiro Yukawa, and Isao Yamada

    International Conference on Acoustics, Speech and Signal Processing


    Oral presentation (general)

  • Projection-based dual averaging for stochastic sparse optimization

    Asahi Ushio and Masahiro Yukawa

    International Conference on Acoustics, Speech and Signal Processing


    Oral presentation (general)

  • Complex NMF with the generalized Kullback-Leibler divergence

    Hirokazu Kameoka, Hideaki Kagami, and Masahiro Yukawa

    International Conference on Acoustics, Speech and Signal Processing


    Oral presentation (general)


Research Projects of Competitive Funds, etc.

  • Development and generalization of super-robust estimation methods via enhanced regularization: applications to signal processing and machine learning


    MEXT/JSPS, Grant-in-Aid for Scientific Research (B), Principal investigator

  • Development and practical application of online analysis methods for nonlinear multiscale data with imperfections


    MEXT/JSPS, Grant-in-Aid for Scientific Research (B), Principal investigator

  • Analysis of reproducing-kernel adaptive filters and development of high-performance algorithms


    MEXT/JSPS, Grant-in-Aid for Scientific Research (C), Principal investigator

Awards

  • JSPS Prize

    Masahiro Yukawa, 2022.02, Japan Society for the Promotion of Science

    Type of Award: Other

  • The Young Scientists' Prize, FY2014 Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology

    Masahiro Yukawa, 2014.04, MEXT, Research on adaptive signal processing for new-generation information and communication systems



  • Funai Academic Prize

    Masahiro Yukawa, 2016.04, Funai Foundation for Information Technology, Research on adaptive signal processing algorithms for new-generation information and communication systems

    Type of Award: Award from publisher, newspaper, foundation, etc.

  • KDDI Foundation Award (Excellent Research Award)

    Masahiro Yukawa, 2015.03, KDDI Foundation, Kernel learning and super-resolution beamformer methods

    Type of Award: Award from publisher, newspaper, foundation, etc.

  • Telecom System Technology Award, the Telecommunications Advancement Foundation

    Masahiro Yukawa, 2014.03, The Telecommunications Advancement Foundation, Multikernel Adaptive Filtering

    Type of Award: Award from publisher, newspaper, foundation, etc.





Courses Taught













Memberships in Academic Societies


  • IEEE


Committee Experiences

  • 2022.04

    IEEE Transactions on Signal Processing, Senior Area Editor

  • 2015.02

    IEEE Transactions on Signal Processing, Associate Editor