We are proud to announce the following three keynote speakers for VMV 2017:

Daniel Weiskopf
Eye Tracking for Visualization and Visualization for Eye Tracking
Daniel Weiskopf, Universität Stuttgart 

Abstract
There is a growing interest in eye tracking as a research method and technology in many communities, including not only visualization research but also computer graphics, human-computer interaction, applied perception, psychology, and cognitive science. Progress in hardware and the falling cost of eye tracking devices have made this analysis technique accessible to a large population of researchers. Recording the observer’s gaze can reveal how dynamic graphical displays are visually accessed and which information is being processed. Such gaze information is available in real time, so eye tracking can provide quick responses to user interaction and viewing behavior, supporting gaze-contingent displays and visualization. However, the analysis and visualization of spatiotemporal gaze data remain challenging in this emerging discipline. I discuss the relationship between eye tracking and visualization from two angles: (1) How can eye tracking help us understand how users work with visual interfaces, thus serving as a basis for improving computer-based visualization? (2) How can visualization facilitate the analysis of gaze recordings? I argue that it is useful to combine both perspectives, eventually targeting “visualization for visualization (vis4vis)” as a research topic.
Biography
Daniel Weiskopf is a professor and co-director of the Visualization Research Center (VISUS) at the University of Stuttgart, Germany. He received his Dr. rer. nat. (PhD) degree in physics from the University of Tübingen, Germany (2001), and the Habilitation degree in computer science from the University of Stuttgart, Germany (2005). His research interests include information and scientific visualization, visual analytics, eye tracking, GPU methods, computer graphics, and special and general relativity. He is the speaker of the Collaborative Research Center SFB/Transregio 161 “Quantitative Methods for Visual Computing”, which includes eye tracking as a quantification approach, and a co-initiator of the Workshop on Eye Tracking and Visualization (ETVIS).

 

Bernd Bickel
Computational Fabrication: Creating Virtual Content for the Real World
Bernd Bickel, IST Austria
Abstract

In recent years, computer graphics researchers have contributed significantly to the development of novel computational tools for 3D printing. In this talk I will describe recent progress in the area of computational fabrication towards novel concepts for reproducing objects with nontrivial shapes and topologies. Among several projects, I will present FlexMolds, a novel computational approach to automatically design flexible, reusable molds that, once 3D printed, allow us to physically fabricate, by means of liquid casting, multiple copies of complex shapes with rich surface details and complex topology. I will then turn to the design of objects that can self-deform. I will introduce CurveUps, curvy shells that form from an initially flat state. They consist of small rigid tiles that are tightly held together by two pre-stretched elastic sheets attached to them. Our method allows the realization of smooth, doubly curved surfaces that can be fabricated as a flat piece; once released, the restoring forces of the pre-stretched sheets drive the object into its 3D shape. CurveUps are structurally stable in their target configuration. All approaches will be illustrated with examples. Finally, I will give an outlook on the field and present open challenges.

Biography

Bernd Bickel is an Assistant Professor heading the Computer Graphics and Digital Fabrication group at IST Austria. He is a computer scientist interested in computer graphics and its overlap with animation, biomechanics, material science, and digital fabrication. His main objective is to push the boundaries of how digital content can be efficiently created, simulated, and reproduced. Bernd obtained his Master's degree in Computer Science from ETH Zurich in 2006. For his PhD studies, Bernd joined the group of Markus Gross, a full professor of Computer Science at ETH Zurich and the director of Disney Research Zurich. From 2011 to 2012, Bernd was a visiting professor at TU Berlin, and in 2012 he became a research scientist and research group leader at Disney Research. In early 2015 he joined IST Austria. He received the ETH Medal for an outstanding dissertation in 2011, the Eurographics Best PhD Award in 2012, the Microsoft Visual Computing Award in 2015, an ERC Starting Grant in 2016, and the ACM SIGGRAPH Significant New Researcher Award in 2017. Bernd's work focuses on two closely related challenges: (1) developing novel modeling and simulation methods, and (2) investigating efficient representation and editing algorithms for materials and functional objects. Recent work includes: theoretical foundations and practical algorithms for measuring and modeling the deformation behavior of soft tissue; simulating and reproducing fundamental properties such as elasticity, surface reflectance, and subsurface scattering; and computational design systems for efficiently creating functional artifacts such as deformable objects and mechanical systems.

 

Carol O'Sullivan
The perception of physical interactions in Mixed Reality
Carol O'Sullivan, Trinity College Dublin
Abstract
Causality is perceived when an event can be seen to cause a particular response. When errors in the laws of physics are perceived, the event no longer appears plausible to the viewer. Take the example of a recent augmented reality game for phones, Pokemon Go: when a user “throws” a virtual pokeball, it either hits or misses a virtual target overlaid on the real world, but there is no physical interaction between the ball and the real world. Now consider playing a similar game in Mixed Reality: the user perceives that the virtual ball is really in her hand; when it is thrown, she feels that the forces she has exerted have caused the resulting motion of the ball; when she hits the virtual target, or misses and hits a real object, she perceives its response as physically plausible. In this ideal setting, the perception of causality has been maintained. Such experiences in Mixed Reality have not yet been achieved, and in this talk I will discuss the challenges of doing so and give an overview of our previous research results that could help.
Biography
Carol O'Sullivan is the Professor of Visual Computing at Trinity College Dublin and head of the Graphics, Vision and Visualization (GV2) research group. From 2013 to 2016 she was a Senior Research Scientist at Disney Research in Los Angeles, and she also spent a year's sabbatical as a Visiting Professor at Seoul National University from 2012 to 2013. She joined TCD in 1997 and served as Dean of Graduate Studies from July 2007 to July 2010. Her research interests include Graphics and Perception, Computer Animation, and Crowd and Human Simulation. She was Co-Editor-in-Chief of the ACM Transactions on Applied Perception (TAP) for six years. Carol has been a member of many international program committees, a reviewer for various journals, and has served many times on the papers committees of the ACM SIGGRAPH and Eurographics conferences. She has chaired several conferences and workshops and is currently serving as program co-chair for Intelligent Virtual Agents (IVA 2017) and Motion in Games (MIG 2017). Prior to her PhD studies, she spent several years in industry working in software development. She was elected a Fellow of Trinity College for significant research achievement in 2003 and a Fellow of the European Association for Computer Graphics (Eurographics) in 2007.