Learning and Simulating Human Movement in Public Spaces

Abstract

Prior urban studies initiatives have used observational methods to capture and model human movement in public spaces, resulting in static visualizations that highlight generalizable behaviors and trends. Motivated by the limitations of static representations in portraying dynamic movement, this thesis presents a proof-of-concept tool that uses a data-driven approach to learn and simulate human movement in urban public spaces. Human movement is learned with a reinforcement learning model proposed by Kitani et al. in “Activity Forecasting” (2012). This model uses human interactions with the static features of a scene (buildings, cars, grass, etc.) to predict paths of movement. I retrain this model on videos of pedestrian-friendly scenes to obtain a set of feature weights that convey the influence of each static feature on human movement. With these feature weights, I feed sample images of public spaces into an Optimal Control (OC) model that forecasts a trajectory between a specified source and destination. The user can also paint additional static features onto the image to see how the predicted trajectories change. I use an adapted form of Dijkstra’s Shortest Path algorithm (DSP) to find the maximum-likelihood single-line path from source to destination, and I stitch motion-capture figures from CMU’s MoCap dataset along the path to simulate this movement. My findings reveal that retraining the OC model on more pedestrian-friendly scenes improves the model’s sensitivity to static features like “grass” and “pavement” while decreasing its sensitivity to features like “car.” Further, an analysis of my path-finding algorithm shows that the overall cost of the paths output by DSP is lower than that of straight-line paths, supporting the plausibility of the simulated patterns of movement. This tool thus presents an alternate, dynamic form of spatial representation for architects and urban planners, with the potential to deepen understanding of human interaction with spaces and lead to more people-sensitive built environments.
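The path-finding step can be made concrete with a short sketch. The following is a minimal illustration of the adapted-DSP idea described above rather than the repository's actual implementation: it assumes the OC model's output is available as a 2-D NumPy likelihood matrix, and the function and variable names are placeholders.

```python
import heapq
import numpy as np

def max_likelihood_path(likelihood, source, dest, eps=1e-12):
    """Adapted Dijkstra on a pixel grid: each cell costs -log(likelihood),
    so the minimum-cost route is the maximum-likelihood single-line path
    from source to dest (both given as (row, col) tuples)."""
    cost = -np.log(likelihood + eps)          # low likelihood -> high cost
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[source] = cost[source]
    frontier = [(dist[source], source)]
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]  # 8-connected grid
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == dest:
            break
        if d > dist[r, c]:
            continue                              # stale queue entry
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + cost[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + cost[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(frontier, (dist[nr, nc], (nr, nc)))
    path, node = [dest], dest                     # walk predecessors back
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dest]

# Hypothetical usage with a random stand-in for an OC likelihood map:
# lik = np.random.rand(64, 64)
# path, total = max_likelihood_path(lik, (0, 0), (63, 63))
```

Because the cell costs are negative log-likelihoods, the total cost of the returned path can be compared directly against the cost accumulated along a straight line between the same endpoints, which is the comparison behind the claim that DSP paths cost less than straight-line paths.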

Navigating this repository

Under code/ you can find two Python Jupyter notebooks that constitute the bulk of my project:

  • image_processing.ipynb: code for processing the raw VIRAT video files into the three kinds of intermediate data required by the HIOC model, along with an evaluation of the feature weights output by the original and retrained models (a frame-extraction sketch follows this list). The code for training the model itself is taken from the Activity Forecasting code base and can be found under "Inverse Optimal Control (IOC) Demo" at http://www.cs.cmu.edu/~kkitani/datasets/index.html
  • simulation_processing.ipynb: code for processing the likelihood matrices returned by the Optimal Control model, implementing the adapted DSP algorithm, and loading and stitching the MoCap figures along the resulting paths (a path-resampling sketch also follows this list). The code for running the Optimal Control model can be found under "Optimal Control (OC) Demo" at the same page: http://www.cs.cmu.edu/~kkitani/datasets/index.html
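To make the video-processing stage concrete, here is a minimal, hypothetical sketch of the first step one would take on a raw VIRAT clip: sampling still frames with OpenCV so later stages can label them. The function name, file names, and sampling rate are all placeholders; the notebook's actual pipeline and the exact intermediate formats the HIOC code expects are not reproduced here.

```python
import cv2  # assumes the opencv-python package is available

def sample_frames(video_path, out_pattern, every_n=30):
    """Save every n-th frame of a raw video clip as a still image,
    e.g. for per-frame semantic labeling of scene features."""
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of stream
            break
        if index % every_n == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage (the file names are placeholders):
# sample_frames("VIRAT_S_000001.mp4", "frames/frame_{:04d}.png")
```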
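For the stitching stage, here is a sketch of one plausible sub-step: resampling the single-line pixel path from the DSP stage at equal arc-length intervals so that successive MoCap poses land a fixed stride apart. This is an illustration under that assumption, not code from simulation_processing.ipynb; the step size and names are placeholders.

```python
import numpy as np

def resample_path(path, step=5.0):
    """Resample a pixel path (a list of (row, col) nodes) at roughly
    equal arc-length intervals; each resampled point can then anchor
    one pose of a CMU MoCap walk cycle as the figure advances."""
    pts = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    arc = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    targets = np.arange(0.0, arc[-1], step)              # evenly spaced stops
    rows = np.interp(targets, arc, pts[:, 0])
    cols = np.interp(targets, arc, pts[:, 1])
    return np.stack([rows, cols], axis=1)
```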

Under animations/ you can find sample .mp4 files of the generated simulations:
  • compiled_animation.mp4: a compilation of simulations generated for four sample scenes: a plaza, a scene from Barcelona, a parking lot, and a webcam view of Times Square.
  • suburban_animation.mp4: a longer simulation (5 paths) generated for the sample plaza scene.

More information

None of the intermediate data used to run the Jupyter notebooks has been provided here. To request this data or to learn more about the project, please contact [email protected].

About

Princeton University Senior Thesis
