I received my PhD from the Department of Electrical and Computer Engineering at Rutgers University in 2019. My broad research interests lie in the fields of Mobile Computing and Pervasive Systems, with a particular focus on communication and sensing techniques using Visible Light and Capacitive Coupling.
I have been working at the Standards and 5G Mobility Lab of Samsung Research America since August 2019.
Body-guided Communications. The growing number of devices we interact with requires a convenient yet secure solution for user identification, authorization, and authentication. Current approaches are cumbersome, susceptible to eavesdropping and relay attacks, or energy inefficient. In this work, we propose a body-guided communication mechanism to secure every touch when users interact with a variety of devices and objects. The method is implemented in a hardware token worn on the user’s body, for example in the form of a wristband, which interacts with a receiver embedded inside the touched device through a body-guided channel established when the user touches the device. Experiments show low-power (µJ/bit) operation while achieving superior resilience to attacks: the signal received at the intended receiver through the body channel is at least 20 dB stronger than that available to an adversary at cm range. [MobiCom 2018] [1-min video]
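As a back-of-the-envelope illustration of the figures above, the sketch below (plain Python, with illustrative numbers that are not from the paper) converts a receiver-to-adversary power ratio into dB and a transmit budget into energy per bit:

```python
import math

def db_ratio(p_receiver, p_adversary):
    """Power ratio in dB between the intended receiver and an eavesdropper."""
    return 10 * math.log10(p_receiver / p_adversary)

def energy_per_bit_uj(power_mw, bitrate_bps):
    """Energy per bit in microjoules at a given transmit power and bit rate."""
    return (power_mw * 1e-3) / bitrate_bps * 1e6

# A 100x power advantage at the touched device corresponds to the
# reported >= 20 dB margin over a nearby adversary.
print(db_ratio(100.0, 1.0))
# At a hypothetical 1 mW and 1 kbps, the cost is about 1 uJ per bit.
print(energy_per_bit_uj(1.0, 1000))
```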
Panoptes: Infrastructure Camera Control. Steerable surveillance cameras offer a unique opportunity to support multiple vision applications simultaneously. However, state-of-the-art camera systems do not support this, as they are typically limited to one application per camera. We believe the one-to-one binding between a steerable camera and its application should be broken: the camera can then be quickly moved to the view needed by a different vision application. Done well, the scheduling algorithm can support a larger number of applications over an existing network of surveillance cameras. With this in mind, we developed Panoptes, a technique that virtualizes a camera view and presents a different fixed view to each application. A scheduler uses the camera controls to move the camera appropriately, providing the expected view to each application in a timely manner while minimizing the impact on application performance. Experiments with a live camera setup demonstrate that Panoptes can support multiple applications, capturing up to 80% more events of interest in a wide scene compared to a fixed-view camera. [IPSN 2017]
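The scheduling idea can be sketched as a simple earliest-deadline-first loop. This is a hypothetical simplification, not the Panoptes algorithm: it assumes a fixed steering delay between any two views, whereas real pan-tilt-zoom cost depends on the angular distance.

```python
import heapq

def schedule_views(requests, move_time):
    """Greedy earliest-deadline-first plan for camera view requests.

    requests: (deadline, view_id, dwell_time) tuples.
    move_time: fixed steering delay between any two views.
    Returns (start_time, view_id) visits, dropping requests whose
    deadline can no longer be met.
    """
    heap = list(requests)
    heapq.heapify(heap)          # orders requests by earliest deadline
    now, plan = 0.0, []
    while heap:
        deadline, view, dwell = heapq.heappop(heap)
        start = now + move_time  # steer the camera to the new view
        if start + dwell <= deadline:
            plan.append((start, view))
            now = start + dwell
    return plan

# Two applications request views with different deadlines; the scheduler
# serves the tighter deadline first and still satisfies both.
print(schedule_views([(5.0, "doorway", 2.0), (4.0, "parking", 1.0)], 0.5))
```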
Activity Sensing using Ceiling Photosensors. This project explores the feasibility of localizing and detecting activities of building occupants using visible light sensing across a mesh of light bulbs. Existing visible light sensing (VLS) techniques require either light sensors deployed on the floor or a device carried by the person. Our approach integrates photosensors with the light bulbs themselves and exploits the light reflected off the floor to achieve an entirely device-free, light-source-based system. This forms a mesh of virtual light barriers across networked lights that tracks shadows cast by occupants. The design employs a synchronization circuit that implements a time-division signaling scheme to differentiate between light sources, and a sensitive sensing circuit to detect small changes in weak reflections. Sensor readings are fed into supervised indoor tracking algorithms as well as occupancy and activity recognition classifiers. Our prototype uses modified off-the-shelf LED flood light bulbs and is installed in a typical office conference room. We evaluate the performance of our system in terms of localization, occupancy estimation, and activity classification, finding a 0.89 m median localization error as well as 93.7% and 93.78% occupancy and activity classification accuracy, respectively. [INFOCOM 2018], [ACM VLCS 2016]
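The time-division idea can be sketched as follows. This is a minimal Python sketch under simplifying assumptions: one photosensor sample per slot in round-robin bulb order, and an illustrative 5% drop threshold (neither the slot layout nor the threshold comes from the paper).

```python
def demux(samples, n_bulbs):
    """Attribute each photosensor sample to the bulb that owned its
    time slot, assuming one sample per slot in round-robin order."""
    per_bulb = [[] for _ in range(n_bulbs)]
    for i, reading in enumerate(samples):
        per_bulb[i % n_bulbs].append(reading)
    return per_bulb

def shadow_detected(baseline, reading, threshold=0.05):
    """Flag a fractional drop in reflected light larger than threshold,
    i.e. a virtual light barrier broken by an occupant's shadow."""
    return (baseline - reading) / baseline > threshold

# An interleaved stream from three bulbs is split into per-bulb series,
# each of which can then be compared against its own baseline.
print(demux([1, 2, 3, 4, 5, 6], 3))
```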
TextureCode. Embedded screen–camera communication techniques encode information in screen imagery that can be decoded with a camera receiver yet remains unobtrusive to the human observer. We study the design space for flicker-free embedded screen–camera communication. In particular, we identify a dimension orthogonal to prior work, spatial content-adaptive encoding, and observe that combining multiple dimensions is essential to achieve both high capacity and minimal flicker. Building on these insights, TextureCode develops content-adaptive encoding techniques that exploit visual features such as edges and texture to communicate information unobtrusively. TextureCode achieves an average goodput of about 22 kbps, significantly outperforming existing work while remaining flicker-free. [INFOCOM 2016]
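The content-adaptive selection step can be illustrated with a small sketch that picks high-texture blocks by intensity variance. The variance metric and threshold here are stand-ins for illustration; TextureCode's actual feature extraction is more involved.

```python
def textured_blocks(gray, block, thresh):
    """Return top-left corners of blocks whose intensity variance
    exceeds thresh -- a crude texture proxy for deciding where
    embedded data would be least visible."""
    h, w = len(gray), len(gray[0])
    picks = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            vals = [gray[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            if var > thresh:
                picks.append((by, bx))
    return picks

# A tiny 4x4 "image": only the high-contrast top-right block qualifies.
gray = [[0, 0, 10, 90],
        [0, 0, 90, 10],
        [5, 5,  5,  5],
        [5, 5,  5,  5]]
print(textured_blocks(gray, 2, 100.0))
```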
Privacy Respecting Cameras. The ubiquity of cameras in today’s world has played a key role in the growth of sensing technology and mobile computing. At the same time, it has raised serious concerns about the privacy of people who are photographed, intentionally or unintentionally. We are exploring the use of near-visible/infrared light communication to design “invisible light beacons” through which the privacy preferences of photographed users are communicated to cameras. In particular, we explore a design where the beacon transmitters are worn by users on their eyewear and transmit a privacy code through ON-OFF patterns of light beams from IR LEDs. [VLCS/MobiCom'14]
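The beacon idea can be sketched as simple ON-OFF keying with a start marker. The preamble and framing below are hypothetical choices for illustration, not the scheme from the paper.

```python
# Hypothetical start marker so a camera can find the code in the stream.
PREAMBLE = [1, 1, 1, 0]

def beacon_pattern(privacy_bits):
    """ON-OFF LED states broadcasting a privacy code after a preamble."""
    return PREAMBLE + list(privacy_bits)

def decode(states):
    """Recover the privacy code from an observed ON/OFF sequence by
    locating the first preamble; returns None if no preamble is seen."""
    for i in range(len(states) - len(PREAMBLE) + 1):
        if states[i:i + len(PREAMBLE)] == PREAMBLE:
            return states[i + len(PREAMBLE):]
    return None

# A camera observing some idle frames followed by the beacon recovers
# the 3-bit privacy code.
print(decode([0, 0] + beacon_pattern([1, 0, 1])))
```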
I have been a teaching assistant for the following courses at Rutgers ECE Department: