I'm a Ph.D. student working with Jeff Heer, and a member of the Human-Computer Interaction and Visualization groups at Stanford. My research broadly covers visualization and web design tools, and multi-user multi-surface interactions.
I graduated from UC San Diego, where I worked with Jim Hollan to explore interactions with wall-sized displays. During my time at UCSD's Revelle College, I also helped establish an internship program, and served as a Resident Advisor, an Orientation Leader, and twice as a member of student government.
Papers and Notes
Advances in data mining and knowledge discovery have transformed the way Web sites are designed. However, while visual presentation is an intrinsic part of the Web, traditional data mining techniques ignore render-time page structures and their attributes. This paper introduces design mining for the Web: using knowledge discovery techniques to understand design demographics, automate design curation, and support data-driven design tools. This idea is manifest in Webzeitgeist, a platform for large-scale design mining comprising a repository of over 100,000 Web pages and 100 million design elements. This paper describes the principles driving design mining, the implementation of the Webzeitgeist architecture, and the new class of data-driven design applications it enables.
Large-scale display walls, and the high-resolution visualizations they support, promise to become ubiquitous. Natural interaction with them, especially in collaborative environments, is increasingly important and yet remains an ongoing challenge. Part of the problem is a resolution mismatch between low-resolution input devices and high-resolution display walls. In addition, enabling concurrent use by multiple users is difficult. In this paper, we present an overlay interface element superimposed on wall-display applications to help constrain interaction, focus attention on subsections of a display wall, and facilitate collaborative multi-user workflows.
Posters, Demos, and Technical Reports
CHI 2013 features 30-second "Video Preview" summaries for each of its 500+ events. The Interactive Schedule uses large display screens and mobile applications to help attendees navigate this wealth of video previews and identify events they would like to attend.
Researchers have long envisioned a Semantic Web, where unstructured Web content is replaced by documents with rich semantic annotations. This paper introduces a method for automatically "semantifying" structural page elements: using machine learning to train classifiers that can be applied post-hoc.
We present a platform for large-scale machine learning on Web designs, which consists of a Web crawler and proxy server to store a lossless and immutable snapshot of the Web; a page segmenter that codifies a page's visual layout; and crowdsourced metadata that augments segmentations.
arvindsatya at cs dot stanford dot edu