Video surveillance is an effective means of monitoring activities over a large area, with cameras acting as extended eyes. However, this additional security comes at the cost of privacy loss for citizens not involved in any illicit activity. The challenge is how to quantify this privacy loss. In this thesis, privacy loss is modeled as an adversary's ability to correlate sensitive information with the identity of the individuals in the video. An anonymity-based approach is used to consolidate identity leakage through explicit channels of bodily cues, such as facial information, and implicit channels that arise from what, when, and where information. The proposed framework provides more robust privacy loss measures and a better tradeoff between security and privacy.
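The anonymity-based idea can be illustrated with a toy computation: the adversary's what/when/where observations narrow down the set of individuals consistent with them, and the smaller that anonymity set, the greater the privacy loss. This is a minimal sketch with hypothetical attribute names, not the thesis's actual measure.

```python
def anonymity_set(individuals, observation):
    """Individuals whose attributes are consistent with the adversary's
    what/when/where observation. Attribute names are hypothetical."""
    return [p for p in individuals
            if all(p.get(k) == v for k, v in observation.items())]

def privacy_loss(individuals, observation):
    """A simple measure: loss grows as the anonymity set shrinks
    (1.0 when the individual is uniquely identified)."""
    k = len(anonymity_set(individuals, observation))
    return 1.0 / k if k else 0.0
```

When an observation matches several people, each remains partly anonymous; when it pins down a single person, the identity has fully leaked.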
The surveillance task is target-centric (people, vehicles, etc.), and the level of human attention required depends on the number of targets in the camera view. I model this workload as a Markov chain and propose a dynamic workload assignment method that equalizes the number of targets monitored by each operator by dynamically changing the camera-to-operator assignment, thereby improving surveillance effectiveness.
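Equalizing the per-operator target count can be sketched as a load-balancing step run whenever target counts change: assign the most heavily loaded cameras first, each to the operator currently watching the fewest targets. This is an illustrative greedy heuristic (longest-processing-time style), not the thesis's exact assignment method.

```python
import heapq

def assign_cameras(targets_per_camera, n_operators):
    """Assign each camera to the operator with the fewest targets so far.
    targets_per_camera maps camera id -> current target count."""
    # Min-heap of (current load, operator id)
    heap = [(0, op) for op in range(n_operators)]
    heapq.heapify(heap)
    assignment = {}
    # Heaviest cameras first gives a tighter balance
    for cam, count in sorted(targets_per_camera.items(),
                             key=lambda kv: -kv[1]):
        load, op = heapq.heappop(heap)
        assignment[cam] = op
        heapq.heappush(heap, (load + count, op))
    return assignment
```

Re-running this as the Markov-chain workload model updates the target counts yields the dynamic camera-to-operator reassignment described above.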
Background subtraction is a commonly employed approach for tracking in scenarios where the ambience is more or less constant in terms of illumination and number of objects. However, in office environments, where the illumination can change easily when lights are switched on or off, background subtraction can lead to erroneous tracking. In this paper we propose a neurobiology-inspired, saliency-based particle filter approach that uses low-level features such as color, luminance, and edge information, along with motion cues, to track a single person. We have tested our method on clips showing a single person carrying out a range of activities expected in an office environment. Our method outperforms a background subtraction method using a Kalman filter, in terms of both the number of frames showing correct tracking and change detection for automatic initialization of tracks.
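The particle filter at the core of this approach repeats a predict-weight-resample cycle. The sketch below shows one such cycle, assuming an `observe` function that scores a particle's position against the appearance cues (color, luminance, edge saliency); the function names and motion model are illustrative, not the paper's exact formulation.

```python
import numpy as np

def particle_filter_step(particles, observe, motion_std=5.0):
    """One predict-weight-resample cycle of a particle filter tracker.
    particles: (N, 2) array of candidate positions.
    observe(p): likelihood of position p under the saliency-based cues."""
    # Predict: diffuse particles with a random-walk motion model
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    # Weight: score each particle against the observation model
    weights = np.array([observe(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles proportional to their weights
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Because the weights come from appearance saliency rather than a background model, a sudden illumination change degrades the scores gracefully instead of invalidating the whole model.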
In recent years, we have seen significant research interest in a number of multimodal sensing applications such as surveillance, video ethnography, tele-presence, assisted living, and life blogging. However, these applications are currently evolving as separate silos with no interconnection. Further, the individual application-centric architectures typically focus on specific sensors, specific (hardwired) queries, and specific environments. We present a generic sensing architecture, the ‘Observation System’, which allows multiple users to undertake different applications through abstracted interaction with a common set of sensors. The observation system observes the behavior of various objects in an environment and keeps a record of important events and activities in an event-base. In this system, multifarious data collected from disparate sensors and other sources are correlated to understand and gain insights into the environment. The observation system has applications in many areas including, but not limited to, surveillance, traffic monitoring, ethnography, marketing, and healthcare. In this paper, we present the architecture and functionality of such a system and give details of activity detection using multiple sensor streams in a distributed sensing environment. We also present results of this approach and potential extensions to the analysis of more complex activities and events.
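Correlating disparate sensor streams into events can be sketched as grouping observations that fall within a common time window; the stream names and window size below are illustrative, not the observation system's actual design.

```python
def correlate(observations, window=5.0):
    """Group (timestamp, sensor, reading) tuples into candidate events:
    observations within `window` seconds of the event's first observation
    are taken as evidence of one event. A naive single-pass sketch."""
    events = []
    current = []
    for ts, sensor, reading in sorted(observations):
        if current and ts - current[0][0] > window:
            events.append(current)
            current = []
        current.append((ts, sensor, reading))
    if current:
        events.append(current)
    return events
```

An event-base entry would then record each group along with the sensors that contributed to it, so later queries can reason over multi-sensor evidence rather than raw streams.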
With the proliferation of mobile video cameras, it is becoming easier for users to capture videos of live performances and socially share them with friends and the public. As an attendee of such live performances typically has limited mobility, each video camera is able to capture only from a restricted range of viewing angles and distances, producing a rather monotonous video clip. At such performances, however, different users, likely at different angles and distances, capture multiple video clips. These videos can be combined to produce a more interesting and representative mashup of the live performance for broadcasting and sharing. Earlier works select video shots based merely on the quality of the currently available videos. In real video editing, however, recent selection history plays an important role in choosing future shots. In this work, we present MoViMash, a framework for automatic online video mashup that makes smooth shot transitions to cover the performance from diverse perspectives. Shot transition and shot length distributions are learned from professionally edited videos. Further, we introduce view quality assessment in the framework to filter out shaky, occluded, and tilted videos. To the best of our knowledge, this is the first attempt to incorporate history-based diversity measurement, state-based video editing rules, and view quality in automated video mashup generation. Experimental results are provided to demonstrate the effectiveness of the MoViMash framework.
The main vision of the Internet of Things (IoT) is to equip real-life physical objects with computing and communication power so that they can interact with each other for social good. As important members of the IoT, vehicles have seen rapid advancement in communication technology. In this paper we instantiate the IoT to define a social network of vehicles, tNote, where vehicles can share transport-related safety, efficiency, and comfort notes with each other. We leverage the infrastructure laid down by Vehicular Ad-Hoc Networks (VANETs) to propose an architecture for a social network of vehicles in the paradigm of the Social Internet of Things (SIoT). We identify the social structures of vehicles, their relationship types, their interactions, and the components needed to manage the system. We also define the tNote message structure following the Dedicated Short Range Communication (DSRC) standard. The paper concludes with prototype implementation details of the tNote message and the proposed system architecture, along with experimental results.
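A tNote message can be imagined as a small structured payload broadcast over the vehicular channel. The fields and serialization below are hypothetical illustrations of such a note; they are not the paper's actual message format, and real DSRC messages use ASN.1-based encodings rather than JSON.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TNote:
    """Hypothetical tNote payload: a transport-related note shared
    between vehicles. Field names are illustrative only."""
    sender_id: str
    category: str       # e.g. "safety", "efficiency", "comfort"
    note: str
    latitude: float
    longitude: float
    timestamp: float

    def to_wire(self) -> bytes:
        # JSON serialization for illustration only; the paper follows
        # the DSRC standard for the actual message structure.
        return json.dumps(asdict(self)).encode()
```

A receiving vehicle could filter incoming notes by category and proximity before surfacing them to the driver or forwarding them to social contacts.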
Personality plays an important role in the personalized experience of ambient environments. Several aspects of ambient environments, such as advertisement and marketing, can adapt to an individual user’s personality. Several published works seek to infer personality traits by analyzing Internet usage statistics, collected from Facebook, Twitter, YouTube, and various other websites. However, detection accuracy remains far from satisfactory. In this paper, we use a range of divergent features of the Facebook and LinkedIn social networks, both separately and collectively, in order to achieve better results. Our experimental results show that the accuracy of personality detection improves with the use of complementary features from multiple social networks. To the best of our knowledge, this is the first work on detecting personality from multiple social networks. Furthermore, this is also the first attempt to detect personality from the LinkedIn social network in an automated manner.
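Combining features from multiple networks can be sketched as early fusion: per-network feature vectors are concatenated into one input for a personality classifier. The feature names below are hypothetical examples of such signals, not the paper's actual feature set.

```python
def fuse_features(profiles):
    """Concatenate per-network features into one (names, values) pair,
    keyed deterministically so every user gets the same vector layout.
    profiles maps network name -> {feature name: value}."""
    names, values = [], []
    for network in sorted(profiles):
        for feature in sorted(profiles[network]):
            names.append(f"{network}:{feature}")
            values.append(profiles[network][feature])
    return names, values
```

The fused vector lets a single classifier exploit complementary signals, e.g. casual social activity from one network alongside professional activity from another.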