A Few Words about Myself

I am a PhD Candidate at WINLAB/ECE, Rutgers University, working with Prof. Yanyong Zhang. Before coming to Rutgers, I studied Biomedical Engineering at Southern Medical University as an undergraduate student from 2007 to 2011. I spent half a year at the HUAWEI Research Center in Santa Clara as an intern at the beginning of my PhD studies.

My research interests focus on Mobile Computing, Machine Learning, and the Internet of Things (IoT) over Information-Centric Networking (ICN).


  • I am joining AT&T Research Center as a summer intern!

  • I presented our work at PerCom 2016 on March 16th. You can find the slides and the demo poster here.

  • Our paper "Whose Move is it Anyway? Authenticating Smart Wearable Devices Using Unique Head Movement Patterns" has been accepted by PerCom 2016! You can find the paper here.



  1. Sugang Li, Jiachen Chen, Haoyang Yu, Yanyong Zhang, Dipankar Raychaudhuri, Ravishankar Ravindran, Hongju Gao, Lijun Dong, Guoqiang Wang, and Hang Liu, "MF-IoT: A MobilityFirst-Based Internet of Things Architecture with Global Reachability and Communication Diversity", in Proceedings of the 1st IEEE International Conference on Internet-of-Things Design and Implementation (IoTDI), 2016. PDF

  2. Sugang Li, Ashwin Ashok, Chenren Xu, Yanyong Zhang, Janne Lindqvist, and Marco Gruteser, "Whose Move is it Anyway? Authenticating Smart Wearable Devices Using Unique Head Movement Patterns", in Proceedings of the IEEE International Conference on Pervasive Computing and Communications (PerCom), 2016. PDF

  3. Chenren Xu, Sugang Li, Gang Liu, Yanyong Zhang, Emiliano Miluzzo, Yih-Farn Chen, Jun Li, and Bernhard Firner, "Crowd++: Unsupervised Speaker Count with Smartphones", in Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), ACM, 2013. PDF


  1. Sugang Li, Yanyong Zhang, Dipankar Raychaudhuri, Ravishankar Ravindran, Guoqiang Wang, Qingji Zheng, and Lijun Dong, "IoT Middleware Architecture over Information-Centric Network", in Proceedings of the Globecom ICNS (Information Centric Networking Solutions for Real World Applications) Workshop, 2015. PDF

  2. Sugang Li, Yanyong Zhang, Dipankar Raychaudhuri, and Ravishankar Ravindran, "A Comparative Study of MobilityFirst and NDN Based ICN-IoT Architectures", in Proceedings of the IEEE Q-ICN Workshop, co-located with the International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness (QShine), 2014. PDF


  1. Chenren Xu, Sugang Li, Yanyong Zhang, Emiliano Miluzzo, and Yih-Farn Chen, "Crowdsensing the Speaker Count in the Wild: Implications and Applications", IEEE Communications Magazine, 52(10), pp. 92-99, Oct 2014. PDF


Headbanger: We propose a system for direct authentication of users to their head-worn wearable device through a novel approach that identifies users based on motion signatures extracted from their head movements. This approach is in contrast to existing indirect authentication solutions that go through a smartphone or rely on touch-pad swipe patterns. The system, dubbed Headbanger, is a software-only authentication solution that leverages the unique motion patterns created when users shake their head in response to music played on the head-worn device, sensed through the integrated accelerometers.
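To illustrate the idea of matching motion signatures, here is a minimal, hypothetical sketch in Python: it compares two accelerometer traces with dynamic time warping (a common choice for speed-invariant signal matching) and accepts the user when the warped distance to an enrolled template is small. The signals, function names, and threshold are illustrative assumptions, not the actual Headbanger pipeline.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D motion signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def authenticate(sample, template, threshold):
    """Accept when a fresh head-movement sample stays close to the template."""
    return dtw_distance(sample, template) <= threshold

# Toy accelerometer-magnitude traces (synthetic, for illustration only).
template = np.sin(np.linspace(0, 4 * np.pi, 50))
genuine  = np.sin(np.linspace(0, 4 * np.pi, 55))  # same motion, slightly slower
impostor = np.cos(np.linspace(0, 6 * np.pi, 50))  # a different motion pattern

print("genuine distance :", dtw_distance(genuine, template))
print("impostor distance:", dtw_distance(impostor, template))
```

A real system would extract features from windowed three-axis accelerometer data and calibrate the acceptance threshold from enrollment samples; the warping step is what tolerates a user shaking faster or slower on different attempts.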


MF-IoT: MF-IoT enables efficient communication between devices from different local domains, as well as between devices and the infrastructure network. The MF-IoT network layer smoothly handles mobility during communication without interrupting the applications. This is achieved through a transparent translation mechanism at the gateway node that bridges an IoT domain and the core network. In the core network, we leverage MobilityFirst's functionality to provide efficient mobility support for billions of mobile devices through long, persistent IDs, while in the local IoT domain we use short, local IDs for energy efficiency. By seamlessly translating between the two types of IDs, the gateway organically stitches these two parts of the network together.
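The gateway's ID translation can be sketched as a simple bidirectional mapping: long, persistent global IDs live in the core network, while constrained devices inside the domain only ever see short local IDs. The class and method names below are hypothetical, and the toy GUID strings are placeholders; this is a sketch of the translation idea, not MF-IoT's implementation.

```python
class GatewayTranslator:
    """Toy sketch of an MF-IoT-style gateway that maps long, persistent
    global IDs (used in the core network) to short local IDs (used by
    energy-constrained devices inside an IoT domain)."""

    def __init__(self):
        self._guid_to_local = {}
        self._local_to_guid = {}
        self._next_local = 0

    def register(self, guid):
        # Assign a short local ID the first time a global ID is seen;
        # later lookups reuse the same mapping.
        if guid not in self._guid_to_local:
            local = self._next_local
            self._next_local += 1
            self._guid_to_local[guid] = local
            self._local_to_guid[local] = guid
        return self._guid_to_local[guid]

    def to_local(self, guid):
        """Core network -> IoT domain direction."""
        return self._guid_to_local[guid]

    def to_guid(self, local):
        """IoT domain -> core network direction."""
        return self._local_to_guid[local]

gw = GatewayTranslator()
lid = gw.register("GUID-sensor-42")  # hypothetical long global ID
print(lid, "->", gw.to_guid(lid))
```

Because applications address each other by the persistent global ID, a device can move between domains and simply be re-registered at the new gateway, which is what lets mobility be handled without interrupting the application.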


Crowd++: Smartphones are excellent mobile sensing platforms, with the microphone in particular being exercised in several audio inference applications. We take smartphone audio inference a step further and demonstrate for the first time that it is possible to accurately estimate the number of people talking in a certain place - with an average error distance of 1.5 speakers - through unsupervised machine learning analysis of audio segments captured by the smartphones. Inference occurs transparently to the user, and no human intervention is needed to derive the classification model. Our results are based on the design, implementation, and evaluation of a system called Crowd++, involving 120 participants in 10 very different environments. We show that no dedicated external hardware or cumbersome supervised learning approaches are needed - only off-the-shelf smartphones used in a transparent manner. We believe our findings have profound implications for many research fields, including social sensing and personal wellbeing assessment. Currently, we are collaborating with the Rutgers Psychology Department on studying the relationship between children's daily social interactions and autism.
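The unsupervised idea behind speaker counting can be sketched as clustering: compute a feature vector per speech segment, greedily merge segments whose features are close, and report the number of resulting clusters as the speaker count. The feature values, distance metric, and threshold below are toy assumptions for illustration, not the actual Crowd++ pipeline (which works on real audio features such as MFCCs and pitch).

```python
import numpy as np

def count_speakers(segments, threshold):
    """Toy unsupervised speaker count: merge segment feature vectors whose
    Euclidean distance to an existing cluster falls below a threshold, then
    count the clusters. No labeled training data is needed."""
    clusters = []  # each cluster keeps a running mean feature and a size
    for feat in segments:
        feat = np.asarray(feat, dtype=float)
        for c in clusters:
            if np.linalg.norm(feat - c["mean"]) < threshold:
                # Same (presumed) speaker: fold the segment into the cluster.
                c["n"] += 1
                c["mean"] += (feat - c["mean"]) / c["n"]
                break
        else:
            # No close cluster: treat this as a new speaker.
            clusters.append({"mean": feat.copy(), "n": 1})
    return len(clusters)

# Synthetic features: three segments from one speaker, two from another.
segs = [[1.0, 1.1], [1.05, 0.95], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9]]
print(count_speakers(segs, threshold=1.0))  # → 2
```

Because the model is built entirely from the distances between segments, nothing has to be trained or labeled in advance, which is what lets the inference run transparently on a user's phone.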