Jinshan Zeng (Distinguished Professor, Ph.D., Associate Dean)

Published: 2017-03-09


Jinshan Zeng, male, is a Distinguished Professor at Jiangxi Normal University, Associate Dean of the School of Computer and Information Engineering, a master's supervisor, Deputy Director of the Research Center for Language and Spatial Information Science at Jiangxi Normal University, a member of the first cohort of Jiangxi's provincial talent program, a Jiangxi Province big data expert, a youth committee member of the Tianyuan Mathematical Center in Southeast China, and an Associate Editor of the international journal Frontiers in Applied Mathematics and Statistics. (Email: jinshanzeng@jxnu.edu.cn; Office: Room 4408, Zone 4, Xiansu Building)


Education:

(1) 2010-09 – 2015-06, Xi'an Jiaotong University, Mathematics, Ph.D.; advisor: Academician Zongben Xu

(2) 2013-11 – 2014-11, University of California, Los Angeles, Mathematics, joint-training Ph.D. student; advisor: Prof. Wotao Yin

(3) 2008-09 – 2010-06, Xi'an Jiaotong University, Applied Mathematics, M.S.; advisor: Academician Zongben Xu

(4) 2004-09 – 2008-06, Xi'an Jiaotong University, Information and Computational Science, B.S.


Research Experience:

(1) 2015-07 – present, Jiangxi Normal University, School of Computer and Information Engineering, Distinguished Professor

(2) 2019-09 – 2020-09, City University of Hong Kong, School of Data Science, Visiting Scholar

(3) 2018-08 – 2019-02, The Hong Kong University of Science and Technology, Department of Mathematics, Visiting Scholar

(4) 2017-04 – 2018-03, The Hong Kong University of Science and Technology, Department of Mathematics, Visiting Scholar

Research Projects (Principal Investigator):

(1) Jiangxi Province "Thousand Talents Program" for introducing and cultivating high-level innovative and entrepreneurial talent, jxsq2019201124, first cohort of the cultivation-track high-end scientific and technological innovation talent (youth) program, 2020-01 – 2022-12, CNY 1,000,000

(2) National Natural Science Foundation of China, General Program, 61977038, Convergence and Generalization of Training Algorithms for Deep Neural Networks, 2020-01-01 – 2023-12-31, CNY 600,000

(3) National Natural Science Foundation of China, Young Scientists Fund, 61603162, Decentralized Consensus Optimization Algorithms over Directed Graphs and Their Convergence, 2017-01-01 – 2019-12-31, CNY 200,000


Academic Awards and Invited Talks:

(1) 2020 Best Paper Award (Ruolin Award) of the International Consortium of Chinese Mathematicians (ICCM) (awarded paper: Jinshan Zeng, Ke Ma, and Yuan Yao, On global linear convergence in stochastic nonconvex optimization for semidefinite programming, IEEE Transactions on Signal Processing, 67(16): 4261-4275, 2019)

(2) 2018 ICCM Best Paper Award (awarded paper: Yu Wang, Jinshan Zeng (corresponding author), Zhimin Peng, Xiangyu Chang, and Zongben Xu, Linear convergence of adaptively iterative thresholding algorithms for compressed sensing, IEEE Transactions on Signal Processing, 63(11): 2957-2971, 2015)

(3) 45-minute invited talk at the 2020 ICCM Annual Meeting, 2020-12-27 – 2020-12-29, Hefei, China (talk title: On ADMM in Deep Learning: Convergence and Saturation-Avoidance)


Selected Publications:

[1] Yong Chen, Wei He, Xi-Le Zhao, Ting-Zhu Huang, Jinshan Zeng, and Hui Lin, Exploring nonlocal group sparsity under transform learning for hyperspectral image denoising, IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2022.3202359, 2022. (top remote-sensing journal, SCI Q2 Top)

[2] Ke Ma, Qianqian Xu, Jinshan Zeng, Guorong Li, Xiaochun Cao, Qingming Huang, A tale of HodgeRank and spectral method: Target attack against rank aggregation is the fixed point of adversarial game, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, doi: 10.1109/TPAMI.2022.3190939. (CCF-A journal)

[3] Yanwei Fu, Chen Liu, Donghao Li, Zuyuan Zhong, Xinwei Sun, Jinshan Zeng, Yuan Yao, Exploring structural sparsity of deep networks via inverse scale spaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, doi: 10.1109/TPAMI.2022.3168881. (CCF-A journal)

[4] Jinshan Zeng, Wotao Yin, Ding-Xuan Zhou, Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints, Journal of Scientific Computing, 91: 61, 2022. (T1 journal recommended by the Chinese Mathematical Society)

[5] Jinshan Zeng, Min Zhang, Shao-Bo Lin, Fully corrective gradient boosting with squared hinge: Fast learning rates and early stopping, Neural Networks, 417: 136-151, 2022. (SCI Q2)

[6] Jinshan Zeng, Qi Chen, Mingwen Wang, Self-supervised Chinese character font generation based on the tian-zi-ge transform (in Chinese), Scientia Sinica Informationis, 52(1): 145-159, 2022.

[7] Yong Chen, Jinshan Zeng, Wei He, Xi-Le Zhao, and Ting-Zhu Huang, Hyperspectral and Multispectral Image Fusion Using Factor Smoothed Tensor Ring Decomposition, IEEE Transactions on Geoscience and Remote Sensing, 60: 1-17, 2022. (top remote-sensing journal, SCI Q2 Top)

[8] Yong Chen, Ting-Zhu Huang, Wei He, Xi-Le Zhao, Hongyan Zhang, and Jinshan Zeng, Hyperspectral image denoising using factor group sparsity-regularized nonconvex low-rank approximation, IEEE Transactions on Geoscience and Remote Sensing, 60: 1-16, 2022. (top remote-sensing journal, SCI Q2 Top)

[9] Jinshan Zeng, Shao-Bo Lin, Yuan Yao, Ding-Xuan Zhou, On ADMM in deep learning: Convergence and saturation-avoidance, Journal of Machine Learning Research, 22(199): 1-67, 2021. (CCF-A journal)

[10] Ke Ma, Qianqian Xu, Jinshan Zeng, Xiaochun Cao, Qingming Huang, Poisoning Attack against Estimating from Pairwise Comparisons, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, doi: 10.1109/TPAMI.2021.3087514, 08 June 2021. (CCF-A journal)

[11] Jinshan Zeng, Qi Chen, Yunxin Liu, Mingwen Wang and Yuan Yao, StrokeGAN: Reducing mode collapse in Chinese font generation via stroke encoding, in the AAAI Conference on Artificial Intelligence (AAAI), February 2-9, 2021. (CCF-A conference)

[12] Di Wang, Jinshan Zeng, and Shao-Bo Lin. Random sketching for neural networks with ReLU. IEEE Transactions on Neural Networks and Learning Systems, 31(2): 748-762, 2021. (SCI Q1)

[13] Ke Ma, Jinshan Zeng, Jiechao Xiong, Qianqian Xu, Xiaochun Cao, Wei Liu, and Yuan Yao, Fast stochastic ordinal embedding with variance reduction and adaptive step size, IEEE Transactions on Knowledge and Data Engineering, 33(6): 2467-2478, June 2021. (CCF-A journal)

[14] Yanwei Fu, Chen Liu, Donghao Li, Xinwei Sun, Jinshan Zeng, and Yuan Yao, DessiLBI: Exploring Structural Sparsity of Deep Networks via Differential Inclusion Paths, in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020. (CCF-A conference)

[15] Yu Wang, Wotao Yin, and Jinshan Zeng, Global convergence of ADMM in nonconvex nonsmooth optimization, Journal of Scientific Computing, 78(1): 29-63, 2019. (ESI Highly Cited Paper; T1 journal recommended by the Chinese Mathematical Society)

[16] Jinshan Zeng, Ke Ma, and Yuan Yao, On global linear convergence in stochastic nonconvex optimization for semidefinite programming, IEEE Transactions on Signal Processing, 67(16): 4261-4275, 2019. (2020 ICCM Best Paper Award, Ruolin Award)

[17] Jinshan Zeng, Tim Tsz-Kit Lau, Shao-Bo Lin, and Yuan Yao, Global convergence of block coordinate descent in deep learning, in Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, California, PMLR 97: 7313-7323, 2019. (CCF-A conference)

[18] Shaobo Lin, and Jinshan Zeng. Fast learning with polynomial kernel. IEEE Transactions on Cybernetics, 49(10): 3780-3792, October 2019. (SCI Q1)

[19] Shaobo Lin, Jinshan Zeng, and Xiaoqin Zhang, Constructive neural network learning, IEEE Transactions on Cybernetics, 49(1): 221-232, January 2019. (SCI Q1)

[20] Jinshan Zeng, and Wotao Yin. On nonconvex decentralized gradient descent. IEEE Transactions on Signal Processing, 66(11): 2834-2848, 01 June, 2018. (SCI Q2 Top)

[21] Lin Xu, Shaobo Lin, Jinshan Zeng, Xia Liu, Yi Fang and Zongben Xu, Greedy criterion in orthogonal greedy learning, IEEE Transactions on Cybernetics, 48(3): 955-966, 2018. (SCI Q1)

[22] Jinshan Zeng, Ke Ma, and Yuan Yao, Finding global optima in nonconvex stochastic semidefinite optimization with variance reduction, In The 21st International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 84, Lanzarote, Spain, April 9-11, 2018.

[23] Ke Ma, Jinshan Zeng, Jiechao Xiong, Qianqian Xu, Xiaochun Cao, Wei Liu, and Yuan Yao, Stochastic non-convex ordinal embedding with stabilized Barzilai-Borwein step size, In The Thirty-Second AAAI Conference on Artificial Intelligence, pp. 3738-3745, New Orleans, Louisiana, USA, February 2–7, 2018. (CCF-A conference)

[24] Jinshan Zeng, Zhiming Peng, and Shaobo Lin, GAITA: A Gauss-Seidel iterative thresholding algorithm for lq regularized least squares regression, Journal of Computational and Applied Mathematics, 319: 220-235, 2017. (SCI Q2 Top)

[25] Jinshan Zeng, Tao He, Mingwen Wang, A fast proximal gradient algorithm for decentralized composite optimization over directed networks, Systems & Control Letters, 107: 36-43, 2017.

[26] Shaobo Lin, Jinshan Zeng, and Xiangyu Chang, Learning rates for classification with Gaussian kernels, Neural Computation, 29: 3353-3380, Dec. 2017.

[27] Jinshan Zeng, and Wotao Yin, ExtraPush for convex smooth decentralized optimization over directed networks, Journal of Computational Mathematics, 35(4): 381-394, June 1, 2017.

[28] Jinshan Zeng, Shaobo Lin, and Zongben Xu, Sparse regularization: Convergence of iterative jumping thresholding algorithm, IEEE Transactions on Signal Processing, 64(19): 5106-5117, 2016. (SCI Q2 Top)

[29] Yu Wang, Jinshan Zeng, Zhimin Peng, Xiangyu Chang, and Zongben Xu, Linear convergence of adaptively iterative thresholding algorithms for compressed sensing, IEEE Transactions on Signal Processing, 63(11): 2957-2971, 2015. (2018 ICCM Best Paper Award)

[30] Shaobo Lin, Jinshan Zeng, Lin Xu, and Zongben Xu, Jackson-type inequalities for spherical neural networks with doubling weight, Neural Networks, 63: 57-65, March 2015.

[31] Jinshan Zeng, Shaobo Lin, Yao Wang, and Zongben Xu. L1/2 regularization: convergence of iterative Half thresholding algorithm. IEEE Transactions on Signal Processing, 62(9): 2317-2329, 2014. (SCI Q2 Top)

[32] Jinshan Zeng, Shaobo Lin, and Zongben Xu, Sparse solution of underdetermined linear equations via adaptively iterative thresholding, Signal Processing, 97: 152-161, April 2014.

[33] Shaobo Lin, Jinshan Zeng, Jian Fang, and Zongben Xu, Learning rates of lq coefficient regularization learning with Gaussian kernel, Neural Computation, 26(10): 2350-2378, Oct 2014.

[34] Jinshan Zeng, Zongben Xu, Bingchen Zhang, Wen Hong, and Yirong Wu. Accelerated L1/2 regularization based SAR imaging via BCR and reduced Newton skills, Signal Processing, 93(7):1831-1844, July, 2013.

[35] Jinshan Zeng, Jian Fang, and Zongben Xu. Sparse SAR imaging based on L1/2 regularization. Science China-Information Sciences, 55(8): 1755-1775, Aug 2012.

[36] Jian Fang, Jinshan Zeng, Zongben Xu, and Yao Zhao. Efficient DPCA SAR imaging with fast iterative spectrum reconstruction method. Science China-Information Sciences, 55(8): 1838-1851, Aug 2012.


About the Artificial Intelligence Research Group:

The Artificial Intelligence Research Group currently comprises six supervisors of high academic standing, most of whom have overseas academic experience and broad international academic perspectives. The group focuses on mathematical theory, methods, and applications in artificial intelligence, with the following main research areas:

(1) Mathematical theory and methods in AI: algorithm design and theoretical analysis in artificial intelligence

(2) Chinese character generation, recognition, and detection: deep learning-based automatic generation, recognition, and detection of Chinese characters

(3) Remote sensing image processing: hyperspectral image denoising, deblurring, fusion, etc.

(4) Natural language processing: deep learning-based multimodal sentiment analysis and text readability

(5) Medical image processing: deep learning-based medical image segmentation

(6) Software engineering: deep learning-based code generation, knowledge graphs, digital twins, and the metaverse


Selected Team Supervisors:

Yong Chen, male, born in 1993, received his Ph.D. in Mathematics from the University of Electronic Science and Technology of China in December 2020 and was a visiting scholar at RIKEN AIP, Japan, in 2018-2019. His research lies at the problem-driven intersection of applied mathematics and computer science, including big data processing and visual-computing modeling with high-performance algorithms. To date, he has published 26 high-quality papers in the leading journals and conferences of his field, including 21 SCI papers, 11 as first author (among them top journals such as IEEE TIP in image processing, IEEE TCYB in artificial intelligence, and ISPRS P&RS and IEEE TGRS in remote sensing) and 2 as corresponding author, with 1 ESI Highly Cited Paper. He is the principal investigator of one NSFC Young Scientists Fund project and has participated in two NSFC General Program projects and one Young Scientists Fund project. He serves as a reviewer for mainstream SCI journals such as IEEE TGRS, IEEE JSTSP, and IEEE JSTARS. For details, see his personal pages:

GitHub: https://chenyong1993.github.io/yongchen.github.io/

ResearchGate: https://www.researchgate.net/profile/Yong-Chen-10

Google Scholar: https://scholar.google.com/citations?user=lds7VxMAAAAJ&hl=cs


Graduate Admissions:

Students with a solid foundation in mathematics and computer science, a passion for research, a diligent, hard-working, and optimistic attitude, and good teamwork skills are welcome to join the group.


Appendix 1:

If you are interested in pursuing a degree in the Artificial Intelligence Research Group, please read the following notes carefully:

1. Before getting in touch, please consider carefully:

  ■ Are you genuinely interested in the lab's research directions? Research can be full of setbacks; without strong interest, it may be a painful experience.

  ■ Do you carry a heavy financial burden? Worrying about making a living while devoting yourself fully to research is practically impossible for most people. If you face substantial financial pressure, joining the group may not be a good choice.

2. When the lab selects prospective master's students, an excellent undergraduate record (especially in mathematics courses) counts strongly in your favor. In general, you should have a solid mathematical foundation, good programming skills (Python, MATLAB, Java, or C/C++), and reasonable English proficiency (able to read the research literature without much difficulty); ideally, you are also optimistic, proactive, and persevering, think clearly and logically, and communicate well.

3. When the lab selects prospective Ph.D. students, research background and potential are decisive. In general, you should already have participated in fairly cutting-edge research, understand some research topic in depth, have a good publication record, be able to write English papers fluently and give research talks with ease, and have a clear picture of the lab's research.