Dr Lin Weisi is an active researcher in image processing, perception-based signal modelling and assessment, video compression, and multimedia communication systems. In these areas, he has published more than 180 international journal papers and 230 international conference papers, and holds 7 patents; he has also authored 2 books, edited 3 books, and contributed 9 book chapters, with an excellent track record of leading and delivering more than 10 major funded projects (with over S$7m in research funding). He earned his BSc and MSc from Sun Yat-Sen University, China, and his PhD from King's College London, University of London. He was previously the Lab Head of Visual Processing at the Institute for Infocomm Research (I2R). He is a Professor in the School of Computer Science and Engineering, Nanyang Technological University, where he served as Associate Chair (Graduate Studies) in 2013-2014.
He is a Fellow of the IEEE and the IET, and an Honorary Fellow of the Singapore Institute of Engineering Technologists. He was elected a Distinguished Lecturer of both the IEEE Circuits and Systems Society (2016-17) and the Asia-Pacific Signal and Information Processing Association (2012-13), and has given keynote, invited, tutorial, and panel talks at more than 20 international conferences over the past 10 years. He has served as an Associate Editor for IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Multimedia, IEEE Signal Processing Letters, Quality and User Experience, and the Journal of Visual Communication and Image Representation. He has also been Guest Editor for 7 special issues of international journals, and chaired the IEEE MMTC QoE Interest Group (2012-2014). He has served as Technical Program Chair for the IEEE International Conference on Multimedia and Expo (ICME 2013), the International Workshop on Quality of Multimedia Experience (QoMEX 2014), the International Packet Video Workshop (PV 2015), the Pacific-Rim Conference on Multimedia (PCM 2012), and IEEE Visual Communications and Image Processing (VCIP 2017). He believes that good theory is practical, and has delivered 10 major systems and modules for industrial deployment based on the technology he has developed.
As a result of several million years of evolution, humans have developed unique characteristics of perceiving the outside world through 5 senses (sight, hearing, smell, touch and taste). There are at least two good reasons to make the machines we build perceive signals as humans do: 1) the goal of artificial intelligence (AI) is to mimic human capabilities, such as learning and problem solving; 2) there is an increasing need for harmonious human-machine interaction (in the near future we may have to deal with robots acting as colleagues, salespersons or care-givers for our senior citizens).
In this talk, the major problems and research progress in perceptual signal processing, i.e., processing signals in the same way that humans perceive them, will first be introduced. The relevant computational models (for perceptual signal decomposition, human attention determination, just-noticeable differences (JND), and perceptual signal quality/experience assessment) will then be discussed, including the related machine-learning based ones. So far, the majority of research has been performed on visual signals (mainly images and video), with limited work on speech and audio, while exploration of olfaction, haptics and gustation is just emerging. The talk will highlight possible AI application scenarios, based upon the presenter's substantial experience in related academic and industrial projects. The presentation aims to trigger exploration of R&D opportunities in this direction with AI in the big data era.
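To make the JND idea concrete, below is a minimal Python sketch of a pixel-domain JND threshold driven by luminance adaptation. The curve shape (higher thresholds in dark regions, slowly rising thresholds in bright ones) follows the general behaviour of common pixel-domain JND models, but the constants and function names here are illustrative assumptions, not taken from any specific published model.

```python
import math

def luminance_jnd(bg_luminance: float) -> float:
    """Approximate JND visibility threshold for a pixel, given its local
    background luminance on a 0-255 scale. Thresholds are high in dark
    regions, fall toward mid-grey, then rise slowly with brightness.
    All constants are illustrative, not from a specific published model.
    """
    if bg_luminance <= 127:
        # Dark backgrounds: elevated threshold, decreasing toward mid-grey.
        return 17.0 * (1.0 - math.sqrt(bg_luminance / 127.0)) + 3.0
    # Bright backgrounds: threshold grows slowly (linearly) with luminance.
    return 3.0 / 128.0 * (bg_luminance - 127.0) + 3.0

def is_distortion_visible(bg_luminance: float, delta: float) -> bool:
    """A luminance change smaller than the local JND threshold is
    assumed to be imperceptible to a human observer."""
    return abs(delta) > luminance_jnd(bg_luminance)
```

In a perceptual codec or quality metric, such a per-pixel threshold map is what lets distortion below the JND be ignored (or quantization pushed right up to it) without visible loss.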