I am a Senior Research Engineer at Samsung Research America. Before that, I was a Ph.D. Candidate at Rutgers University's Wireless Information Network Lab (WINLAB) advised by Dr. Marco Gruteser and Dr. Kristin Dana.
My research interests include mobile computing and computer vision, with a special focus on multi-modal sensing and collaborative perception. My dissertation focuses on fusing wireless communication with vision sensing to improve system performance in sensing range, efficiency, tracking, and localization.
Previously, I worked with Dr. Kristin Dana on the topic of image retrieval and latent space representation. I received my M.S. degree from Rutgers University in 2018.
PROJECTS
Vi-Fi Cross-Modal Association:
Association of cross-domain sensor data is a fundamental need in applications and systems that exploit multi-modal sensor data. With the pervasive use of cameras and wireless devices, one key instance of this problem is the association between persons detected in camera video and wireless data originating from transmitters of these persons.
We propose a multi-modal approach that associates visually detected persons, represented by the bounding boxes generated by an object detector, with smartphone identifiers (e.g., MAC addresses). We demonstrate an application that finds target persons in surveillance video: each visually detected participant is tagged with a smartphone ID, and the target person with the query ID is highlighted. This work is motivated by the fact that associating subjects observed in camera images with messages transmitted from their wireless devices enables fast and reliable tagging, which is particularly helpful when target pedestrians need to be found in public surveillance footage without relying on facial recognition. The underlying system uses a multi-modal approach that leverages WiFi Fine Timing Measurements (FTM) and inertial sensor (IMU) data to associate each visually detected individual with a corresponding smartphone identifier. These smartphone measurements are strategically combined with RGB-D information from the camera to learn affinity matrices using a multi-modal deep learning network; a toy version of the final assignment step is sketched below. [MobiSys'21 Demo paper], [Demo slides], [Demo video 1], [Demo video 2], [Vi-Fi Dataset], [IPSN'22 paper]
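To make that assignment step concrete, here is a minimal sketch of turning an affinity matrix into a one-to-one matching with the Hungarian algorithm. The random affinity matrix is a stand-in for the output of the multi-modal network, not Vi-Fi's actual model.

```python
# Minimal association sketch: affinity[i, j] scores how well visually
# detected person i matches smartphone j. In Vi-Fi these scores come from
# a multi-modal network over bounding-box, FTM, and IMU features; here we
# use random placeholder values.
import numpy as np
from scipy.optimize import linear_sum_assignment

num_persons, num_phones = 4, 4
affinity = np.random.rand(num_persons, num_phones)

# The Hungarian algorithm minimizes cost, so negate to maximize affinity.
rows, cols = linear_sum_assignment(-affinity)
for person, phone in zip(rows, cols):
    print(f"bounding box {person} -> smartphone {phone} "
          f"(affinity {affinity[person, phone]:.2f})")
```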
FusionEye:
Automated driving and advanced driver assistance systems benefit from a complete understanding of the traffic scene around the vehicle. Information sharing in connected vehicle systems helps each participating vehicle attain a more complete and expanded sensing range beyond its own sensing capability. Existing systems gather such data through cameras and other in-vehicle sensors, but scene understanding can be limited by the sensing range of those sensors or by occlusion from other objects. To gather information beyond the view of one vehicle, we propose and explore FusionEye, a connected vehicle system that allows multiple vehicles to share perception data over vehicle-to-vehicle communications and collaboratively merge this data into a more complete traffic scene. FusionEye uses a self-adaptive topology merging algorithm based on bipartite graph matching (sketched below). We explore its network bandwidth requirements and the trade-off with merging accuracy. Experimental results show that FusionEye creates more complete scenes and achieves a merging accuracy of 88% with a 5% packet drop rate and a transmission latency of around 200 ms. [SECON'19 paper], [Slides]
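The following is a minimal sketch of the bipartite merging idea, assuming detections have already been transformed into a shared coordinate frame; the positions and the 2 m gating threshold are invented for illustration.

```python
# Merge two vehicles' detection lists: build a bipartite distance graph,
# match with the Hungarian algorithm, fuse pairs within a gating threshold,
# and keep unmatched remote detections as newly observed objects.
import numpy as np
from scipy.optimize import linear_sum_assignment

ego = np.array([[2.0, 5.0], [10.0, 3.0]])     # ego detections (x, y) in meters
remote = np.array([[2.3, 5.2], [25.0, 8.0]])  # detections received over V2V
GATE = 2.0                                    # max merge distance (m), illustrative

cost = np.linalg.norm(ego[:, None, :] - remote[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)

merged = [p.copy() for p in ego]
matched = set()
for i, j in zip(rows, cols):
    if cost[i, j] <= GATE:
        merged[i] = (ego[i] + remote[j]) / 2  # average the matched pair
        matched.add(j)
merged += [remote[j] for j in range(len(remote)) if j not in matched]
print(np.round(np.array(merged), 2))
```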
FusionEye-Extension:
When sharing visual traffic information among vehicle nodes, identifying overlapping components and associating common objects is essential for creating an accurate and complete surrounding scene. As an extension to FusionEye, we explore deep learning approaches for real-time vehicle verification in the context of V2V collaborative perception. We propose two deep neural network architectures and apply the trained feature embeddings in FusionEye's bipartite association paradigm. Preliminary results show that features learned from vehicles' appearance and kinematic information improve the verification accuracy to 92%, suggesting a feasible solution for real-time systems; a toy verification check is sketched below. [MobiSys'19 Rising Stars Forum paper], [Slides]
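As a toy illustration of the verification step, assuming each detection already has a learned embedding vector, the check reduces to thresholding a cosine similarity (the 0.8 threshold is invented for the example):

```python
# Toy vehicle verification: two detections are declared the same vehicle
# when their embeddings' cosine similarity exceeds a threshold. Embeddings
# here are random stand-ins for learned appearance/kinematics features.
import numpy as np

def same_vehicle(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.8) -> bool:
    sim = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    return sim >= threshold

rng = np.random.default_rng(0)
emb = rng.normal(size=128)
print(same_vehicle(emb, emb + 0.1 * rng.normal(size=128)))  # near-duplicate -> True
```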
WiFi FTM & Localization:
Academic and industry research has argued for supporting WiFi time-of-flight measurements to improve WiFi localization. The IEEE 802.11-2016 standard includes a Fine Time Measurement (FTM) protocol for WiFi ranging, and several WiFi chipsets offer hardware support, albeit without fully functional open software. We introduce an open platform for experimenting with fine time measurements and a general, repeatable, and accurate measurement framework for evaluating time-based ranging systems. We analyze the key factors and parameters that affect ranging performance and revisit standard error correction techniques for WiFi time-based ranging systems; the per-exchange distance computation is sketched below. The results confirm that meter-level ranging accuracy is possible as promised, but the measurements also show that this can only be consistently achieved in low-multipath environments such as open outdoor spaces, or with denser access point deployments to enable ranging at or above 80 MHz bandwidth. [MobiCom'18 paper], [WiFi FTM Linux Tool]
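For reference, this is how a distance estimate falls out of a single FTM exchange as defined in IEEE 802.11-2016; the picosecond timestamps below are invented so the round trip works out to roughly 10 m.

```python
# One FTM exchange: the responder timestamps the FTM frame's departure (t1)
# and the ACK's arrival (t4); the initiator timestamps the FTM frame's
# arrival (t2) and the ACK's departure (t3). Subtracting the initiator's
# turnaround time (t3 - t2) leaves the two-way time of flight.
C = 299_792_458  # speed of light in m/s

def ftm_distance_m(t1_ps: int, t2_ps: int, t3_ps: int, t4_ps: int) -> float:
    rtt_ps = (t4_ps - t1_ps) - (t3_ps - t2_ps)
    return C * rtt_ps * 1e-12 / 2

# A ~33.4 ns one-way flight time corresponds to roughly 10 m.
print(ftm_distance_m(0, 33_356, 1_033_356, 1_066_712))
```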
Driver assistance and vehicular automation would greatly benefit from uninterrupted lane-level vehicle positioning, especially in challenging environments like metropolitan cities. We explore whether the WiFi Fine Time Measurement (FTM) protocol can complement current GPS and odometry systems to achieve lane-level positioning in urban canyons. We introduce Wi-Go, a system that simultaneously tracks vehicles and maps WiFi access point positions by coherently fusing WiFi FTMs, GPS, and vehicle odometry information (a single range update of this kind is sketched below). Wi-Go also adaptively controls the FTM messaging rate from clients to prevent high bandwidth usage and congestion while maximizing tracking accuracy. In vehicle experiments, Wi-Go achieves lane-level positioning in the urban canyons of Manhattan, New York City (1.3 m median and 2.9 m 90th-percentile error), as well as in suburban areas (0.8 m median and 3.2 m 90th-percentile error). [MobiSys'20 paper]
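The sketch below reduces the kind of fusion Wi-Go performs to a single toy extended Kalman filter step: the vehicle position is dead-reckoned from odometry, then corrected by one FTM range to an access point at a known position. All numbers are illustrative; the real system additionally fuses GPS and estimates the AP positions themselves.

```python
# Toy EKF step: predict (x, y) with an odometry displacement, then update
# with a range measurement z = ||x - ap|| + noise to a known access point.
import numpy as np

x = np.array([0.0, 0.0])         # vehicle position estimate (m)
P = np.eye(2) * 4.0              # state covariance
Q = np.eye(2) * 0.1              # odometry (process) noise
R = 1.0                          # FTM range noise variance (m^2)
ap = np.array([20.0, 10.0])      # known access point position

# Predict: dead-reckon with the odometry displacement for this step.
x = x + np.array([1.0, 0.2])
P = P + Q

# Update: one FTM range measurement to the AP.
z = 21.5                         # measured range (m)
diff = x - ap
pred = np.linalg.norm(diff)      # predicted range h(x)
H = (diff / pred).reshape(1, 2)  # Jacobian of h at the current estimate
S = float(H @ P @ H.T) + R       # innovation variance
K = P @ H.T / S                  # Kalman gain, shape (2, 1)
x = x + (K * (z - pred)).ravel()
P = (np.eye(2) - K @ H) @ P
print(np.round(x, 2))
```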
PUBLICATIONS
Bryan Bo Cao, Abrar Alali, Hansi Liu, Nicholas Meegan, Marco Gruteser, Kristin Dana, Ashwin Ashok, Shubham Jain.
Demo session of the 19th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON'22 Demo),
Virtual Conference, September 20-23, 2022.
"Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors"
Hansi Liu, Abrar Alali, Mohamed Ibrahim, Bryan Bo Cao, Nicholas Meegan, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, Hongsheng Lu.
The International Conference on Information Processing in Sensor Networks (IPSN 2022),
Milan, Italy, May 4, 2022.
"Demo: Lost and Found! Associating Target Persons in Camera Surveillance Footage with Smartphone Identifiers"
Hansi Liu, Abrar Alali, Mohamed Ibrahim, Hongyu Li, Marco Gruteser, Shubham Jain, Kristin Dana, Ashwin Ashok, Bin Cheng, Hongsheng Lu.
The 19th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '21),
Virtual, WI, USA, June 24 - July 2, 2021. [Demo slides], [Demo video 1], [Demo video 2]
Rising Stars Forum of the 17th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys Rising Stars Forum 2019),
Seoul, Republic of Korea, June 21, 2019.
"Verification: Accuracy Evaluation of WiFi Fine Time Measurements on Real Devices"
Mohamed Ibrahim, Hansi Liu, Minitha Jawahar, Viet Nguyen, Marco Gruteser, Richard Howard, Bo Yu, Fan Bai.
The 24th Annual International Conference on Mobile Computing and Networking (MobiCom 2018),
New Delhi, India, Oct. 29 - Nov. 2, 2018.
Note: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
PATENTS
Systems and Methods for Matching Objects in Collaborative Perception Messages Using Multiple Adaptive Thresholds
Hansi Liu, Hongsheng Lu, Rui Guo
US Patent Application No. 17/498,238.
Systems and Methods for Matching Objects in Collaborative Perception Messages
US Patent Application No. 17/498,201.
Sparse Excitation Method for 3-dimensional Underground Cable Localization by Fiber Optic Sensing
Ming-Fang Huang, Ting Wang, Hansi Liu
US Patent Application No. 17/195,538. Publication Date: 10/07/2021.
Underground Optical Fiber Cable Localization
Ming-Fang Huang, Hansi Liu, Ting Wang
US Patent Application No. 17/118,529. Publication Date: 06/17/2021.
Method and apparatus for enabling sequential groundview image projection synthesis and complicated scene reconstruction at map anomaly hotspot
Fan Bai, Hansi Liu, David A Craig
US Patent Application No. 16/249,969. Publication Date: 07/23/2020.
Providing Information-rich Map Semantics to Navigation Metric Map
Hansi Liu, Fan Bai and Shuqing Zeng
US Patent Application No. 15/900,149. Publication Date: 06/02/2020.
NEWS
Aug 2023
Our paper "ViFiT: Reconstructing Vision Trajectories from IMU and Wi-Fi Fine Time Measurements" has been accepted for publication in MobiCom ISACom 2023.
Jul 2023
Our paper "ViFi-Loc: Multi-modal Pedestrian Localization using GAN with Camera-Phone Correspondences" has been accepted for publication in ICMI 2023.
Sep 27 2022
I defended my Ph.D.
Jul 2022
Our paper "ViTag: Online WiFi Fine Time Measurements Aided Vision-Motion Identity Association in Multi-person Environments" is accepted by SECON'22.
Jan 2022
Our paper "Vi-Fi: Associating Moving Subjects across Vision and Wireless Sensors" is accepted by IPSN'22.
We released the Vi-Fi Dataset, a large-scale multi-modal dataset comprises of vision and wireless data across multiple indoor/outdoor scenarios.
June 2021
I am excited to join the ITL group at Toyota North America for a summer internship. I will be working with Dr. Hongsheng Lu on Collaborative Perception Messages (CPM) for connected vehicle systems.
May 2021
Our paper "Demo: Lost and Found! Associating Target Persons in Camera Surveillance Footage with Smartphone Identifiers" is accepted by the demo session of MobiSys'21.
July 2020
Our paper "New Methods for Non-Destructive Underground Fiber Locatlization using Distributed Fiber Optic Sensing Technology" is accepted for publication at OECC 2020.
Mar. 2020
Our paper "Wi-Go: Accurate and Scalable Vehicle Positioning using WiFi Fine Timing Measurement" is accepted for publication at ACM MobiSys 2020.
Sep. 2019
I passed my Ph.D. qualifying exam. Thanks to the committee members (Prof. Dipankar Raychaudhuri, Prof. Yingying Chen, Maria Striki, and Prof. Kristin Dana) for their valuable comments and feedback!
June 2019
I will be joining NEC Labs America in Princeton this summer, working with Ting Wang on underground optical fiber sensing and localization.
April 2019
Our paper "FusionEye: Perception Sharing for Connected Vehicles and its
Bandwidth-Accuracy Trade-offs" is accepted by SECON2019.
May 2018
I will be joining the General Motors R&D department for a summer internship, working on connected vehicle systems.