Workshops & Tutorials Overview

Preliminary Schedule

/files/images/wt_preliminary_scheduel.png

W1 - Enterprise AR Functional Requirements Workshop

Organizers

  • Michael Rygol, the AR for Enterprise Alliance
  • Christine Perey, PEREY Research & Consulting

Website

https://www.perey.com/ISMAR2018-Enterprise-AR-Functional-Requirements-Workshop/

Important Dates

  • Submission of enterprise scenarios and requirements via workshop forms: August 14, 2018
  • Notification of acceptance and feedback from committee reviewers: September 01, 2018
  • Date of the workshop: October 16, 2018

Overview

This workshop focuses on the functional requirements for enterprise AR components. Enterprise AR customers have requirements that differ substantially from those of consumers. Having functional requirements directly benefits enterprise customers: products and services will be interoperable, customer RFPs will be easier to create and respond to, and the research and development communities will have a clearer understanding of the requirements of enterprise AR buyers.

ISMAR attendees conducting research on enterprise AR, as well as providers of AR components and solutions, will gain clear definitions of customer needs. This will lead to higher-value research and greater enterprise AR project success, and can in turn influence research agendas, development roadmaps and future products.

A preliminary set of enterprise AR requirements was created in 2016 through a collaboration between UI LABS (DMDII) and the AREA, delivered through a project led by Lockheed Martin, Caterpillar and Procter & Gamble. These requirements were refined in 2017 and 2018 through several additional cycles of stakeholder input.

This workshop will shed new light on the requirements’ current status, and provide valuable input for the further refinement and application of the enterprise AR requirements documents.


W2 - 2nd International Workshop on Multimodal Virtual & Augmented Reality (MVAR)

Organizers

  • Wolfgang Hürst, Utrecht University, Netherlands
  • Daisuke Iwai, Osaka University, Japan
  • Klen Copic Pucihar, University of Primorska, Slovenia
  • Matjaz Kljun, University of Primorska, Slovenia

Website

http://mvar.science.uu.nl/2018/index.html

Important Dates

  • Submission deadline: July 13, 2018
  • Acceptance notifications: August 14, 2018
  • Camera ready versions due: September 04, 2018
  • Date of the workshop: October 16, 2018

Overview

Despite recent progress in display technology, we are still far from the ultimate goal of creating new virtual environments, or augmentations of existing ones, that feel and react like their real counterparts. Many challenges and open research questions remain, mostly in the areas of multimodality and interaction. For example, current setups predominantly focus on the visual and auditory senses, neglecting other modalities such as touch and smell that are an integral part of how we experience the real world around us. Likewise, it is still an open question how to best interact and communicate with a virtual world or virtual objects in AR. Multimodal interaction offers great potential not only to make this experience more realistic, but also to provide more powerful and efficient means of interacting with virtual and augmented worlds. The aim of this workshop is therefore to investigate all aspects of multimodality and multimodal interaction in relation to VR and AR. What are the most pressing research questions? What are the most difficult challenges? What opportunities do modalities other than vision offer for VR and AR? What are new and better ways to interact with virtual objects and to improve the experience of VR and AR worlds?

/files/images/workshop_mvar.png

W3 - 1st International Workshop on Multimedia Analysis for Architecture, Design and Visual Reality Games (MADVR)

Organizers

  • Konstantinos Avgerinakis, Information Technologies Institute
  • Francesco Bellotti, University of Genoa
  • Maarten Vergauwen, KU Leuven, Technology Campus Ghent
  • Leo Wanner, Universitat Pompeu Fabra
  • Stefanos Vrochidis, Information Technologies Institute

Website

http://mklab.iti.gr/madvr2018/

Important Dates

  • Paper submissions: July 16, 2018
  • Notification of acceptance: August 14, 2018
  • Camera ready paper and registration: September 04, 2018
  • Date of the workshop: October 16, 2018

Overview

MADVR 2018 aims at presenting the most recent work in the area of multimedia analysis in the context of applications for architecture, design and Virtual Reality (VR) games. Nowadays, large amounts of visual and textual data are being generated that are of great interest to architects and video game designers, such as paintings, archival footage, documentaries, movies, reviews, catalogues and artwork. However, in its current form this data is difficult to reuse and repurpose for creative industries applications such as game creation, architecture and design.

This vision is currently pursued by recent research projects (e.g. H2020 V4Design, H2020 Inception, H2020 DigiArt, H2020 REPLICATE), which focus on developing technologies for automatic content analysis and seamless transformation to assist the creative industries in sharing content and maximizing its exploitation. In this context, MADVR has a special interest in image and video analysis, 3D reconstruction, and multilingual analysis that can be applied to VR game authoring and design applications.


W4 - Towards An Immersive Web Workshop

Organizers

  • Jens Grubert, University of Applied Sciences Coburg
  • Blair MacIntyre, Mozilla and Georgia Institute of Technology
  • Rob Manson, Awe.media
  • Christine Perey, PEREY Research & Consulting

Website

http://www.perey.com/ISMAR2018-Towards-Immersive-Web-Workshop/

Important Dates

  • Draft paper submission to workshop program committee: July 23, 2018
  • Notification of acceptance and feedback from committee reviewers: August 14, 2018
  • Camera Ready Deadline for the adjunct proceedings: September 4, 2018
  • Date of the workshop: October 20, 2018

Overview

In the future, Web browsers will provide effective support for delivering Virtual and Augmented Reality experiences, allowing access to a variety of familiar and novel user interface paradigms. As they have grown accustomed to doing on the current Web, users of Web-based Virtual and Augmented Reality will control a wide variety of experiences without leaving the browser. Without running multiple simultaneous apps or “windows,” they may experience the interaction of the digital and physical worlds, composed of content from multiple sources. Just as current Web sites allow users to generate and edit content and to mash up elements from different sources, users of Web-based Virtual and Augmented Reality experiences will search for, combine and change the content they experience, and author new experiences. At present, however, developers of Virtual and Augmented Reality experiences face several challenges in realizing this vision of seamless experience delivery through the Web.

This workshop brings together experts from domains such as Web technologies, computer vision, human-computer interaction or software engineering, to:

  • Discuss the state of the Immersive Web.
  • Discuss possible paths to full support of Web-based interfaces, interaction models and conventions compatible with Web-based Virtual and Augmented Reality.
  • Increase awareness of Web-based Virtual and Augmented Reality in the research and development communities.

/files/images/workshop_tiww.png

W5 - Creativity in Designing With & For Mixed Reality

Organizers

  • Jouke Verlinden, University of Antwerp
  • Regan Watts, University of Antwerp
  • Doris Aschenbrenner, Delft University of Technology
  • Stephan Lukosch, Delft University of Technology

Website

https://mrcreative2018.wordpress.com

Important Dates

  • Deadline for submission: August 1, 2018
  • Notification of acceptance: August 14, 2018
  • Camera-ready deadline: September 4, 2018
  • Date of the workshop: October 20, 2018

Overview

Although developments in devices and software are maturing towards novel Mixed Reality systems, there is too little connection to the design field. Especially when AR is combined with other “smart” technologies (the Internet of Things), perspectives shift from merely technical characteristics and quantifiable human factors to more complex UX scenarios. Although other special interest groups and conferences partly cover this theme (CHI, IUI, UIST), we think ISMAR is the better venue for connecting the graphics/tracking community with design researchers. We would also like to address the lack of software engineering skills among design students and professionals. How can we bridge these disciplines and silos of innovation?

This workshop invites both industrial and academic participants to contribute to this debate, first of all by submitting extended abstracts that cover case studies, best practices and challenges in design for/with AR. To foster a design debate, we strongly encourage submissions of annotated artworks/3D scenes/pictures/floorplans as well.

/files/images/workshop_cdmr.png

W6 - International Workshop on Comfort Intelligence with AR for Autonomous Vehicle 2018

Organizers

  • Masayuki Kanbara, Nara Institute of Science and Technology
  • Itaru Kitahara, Tsukuba University
  • Kiyoshi Kiyokawa, Nara Institute of Science and Technology

Website

http://ambient.naist.jp/ciav2018/

Important Dates

  • Paper submission deadline: July 01, 2018
  • Notification of acceptance: August 14, 2018
  • Camera ready deadline: September 04, 2018
  • Date of the workshop: October 20, 2018

Overview

Many researchers and companies have been developing technologies for autonomous vehicles. Most of these technologies focus on safety control and efficient path planning for autonomous driving. For autonomous vehicles to gain social acceptance, however, the comfort of passengers, who are drivers freed from driving, is an important issue. Passengers of autonomous vehicles feel discomfort when the vehicle behaves unexpectedly or follows an unexpected path. In addition, motion sickness is likely to increase in autonomous vehicles because passengers will be less able to anticipate the vehicle’s behavior. In the near future, as the windscreen of an autonomous vehicle becomes an AR display, problems akin to VR sickness are expected to increase as well. For these reasons, passengers in autonomous vehicles will face many sources of discomfort. This workshop focuses on technologies for improving passenger comfort in autonomous vehicles, such as methods for sensing passengers and the environment, AR technology, AR user interfaces, and AR displays in autonomous vehicles.

/files/images/workshop_ciav.png

W7 - Virtual and Augmented Reality for Good (VAR4Good)

Organizers

  • Arindam Dey, University of South Australia
  • Mark Billinghurst, University of South Australia
  • Gregory Welch, The University of Central Florida
  • Rojas Muñoz, Purdue University

Website

http://ar4good.org/

Important Dates

  • Submission Deadline: July 15, 2018
  • Notification of Acceptance: August 14, 2018
  • Camera Ready Due: September 1, 2018
  • Date of the workshop: October 20, 2018

Overview

Virtual Reality (VR) and Augmented Reality (AR) are becoming mainstream. With recent research and technological advances, it is now possible to use these technologies in almost all domains and places. This provides a greater opportunity to create applications that impact society beyond mere entertainment. Today the world faces challenges in areas such as healthcare, the environment, and education. Now is the time to explore how VR/AR might be used to solve widespread societal challenges.

The third Virtual and Augmented Reality for Good (VAR4Good) workshop will bring together researchers, developers, and industry partners in presenting and promoting research that intends to solve real-world problems using VR/AR. The workshop will provide a platform to grow a research community that discusses challenges and opportunities to create Virtual and Augmented Reality for Good.

We invite application and position papers (2-4 pages, excluding references), that address the way that VR/AR technologies can solve real-world problems in various application domains including, but not limited to, health, the environment, education, sports, the arts, and applications in support of special needs such as assistive, adaptive, and rehabilitative applications. Our focus and preference will be on applications that are beyond general uses of VR/AR. Please see full CFP on our website.


T1 - Cognitive Aspects of Interaction in Virtual and Augmented Reality Systems (CAIVARS)

Presenters

  • Manuela Chessa, University of Genoa, Italy
  • Guido Maiello, Justus-Liebig-Universität Gießen, Germany
  • Fabio Solari, University of Genoa, Italy
  • Dimitri Ognibene, University of Essex, UK
  • Giovanni Maria Farinella, University of Catania, Italy
  • David Rudrauf, University of Geneva, Switzerland

Website

https://sites.google.com/view/caivars18/

Overview

Augmented (AR) and Virtual Reality (VR) systems are designed to immerse humans into rich and compelling simulated environments by leveraging our perceptual systems (i.e. vision, touch, sound). In turn, exposure to AR/VR can reshape and alter our perceptual processing by tapping into the brain’s significant ability to adapt to changes in the environment through neural plasticity. Therefore, to design successful AR/VR systems, we must first understand the functioning and limitations of our perceptual and cognitive systems. We can then tailor AR/VR technology to optimally stimulate our senses and maximize user experience. Understanding how to wield AR/VR tools to reshape how we perceive the world also has incredible potential for societal and clinical applications.

By attending this tutorial, ISMAR attendees will learn how to employ our current understanding of human perception to design more sophisticated and ecological AR/VR systems that optimize user experience and minimize or even eliminate undesired side effects. Additionally, AR/VR researchers attending this tutorial will learn how to exploit the brain’s neuroplasticity to reshape users’ conscious experiences in virtual environments. This knowledge could help evolve AR/VR systems from simple entertainment devices into tools to reshape human perception and behavior.


T2 - Storytelling for Cinematic Virtual Reality

Presenters

  • Mirjam Vosmeer, Amsterdam University of Applied Sciences

Website

Will be announced shortly

Overview

In this tutorial, I will present the VR projects that I have worked on at the Amsterdam University of Applied Sciences, for our research project named ‘Storytelling for 360° Media’.

There are still many gaps in our knowledge of how to produce good VR content. We also do not yet know much about how audiences react to VR, or how to measure concepts like presence, engagement and immersion.

In collaboration with industry partners and students, we have therefore set up a series of VR projects intended to explore the new film language that needs to be developed to fully exploit the possibilities of Cinematic VR. Every project explored an element of storytelling. By reflecting on the results of our projects, I will share what we have learned about VR storytelling and how VR may be validated and evaluated.

This tutorial is not a static summary of theoretical insights, but rather a discussion of a set of cases and issues that have led to insights into storytelling for VR. I will share our research questions, reflect on what went wrong, and share the new questions that have arisen along the way.


T3 - Large-Scale 3D Point Cloud Processing for Mixed and Augmented Reality

Presenters

  • Dorit Borrmann, University of Würzburg
  • Andreas Nuechter, University of Würzburg
  • Thomas Wiemann, University of Osnabrück

Website

http://slam6d.sourceforge.net

Overview

The rapid development of 3D scanning technology, combined with state-of-the-art mapping algorithms, makes it possible to capture 3D point clouds with high resolution and accuracy. The large amount of data collected with LiDAR, RGB-D cameras, or generated through SfM approaches makes the direct use of the recorded data for realistic rendering and simulation problematic. Therefore, these point clouds have to be transformed into representations that fulfill the computational requirements of VR and AR setups.

In this tutorial, participants will be introduced to state-of-the-art methods in point cloud processing and surface reconstruction with open source software, and will learn about their benefits for AR and VR applications through interleaved presentations, software demonstrations and software trials. The focus lies on 3D point cloud data structures (range images, octrees, k-d trees) and algorithms, and their implementation in C/C++. Surface reconstruction using Marching Cubes and other meshing methods will play another central role. Reference material for subtopics such as 3D point cloud registration and SLAM, calibration, filtering, segmentation, meshing, and large-scale surface reconstruction will be provided. Participants are invited to bring their Linux, MacOS or Windows laptops to gain hands-on experience with practical problems occurring when working with large-scale 3D point clouds in VR and AR applications.


Sponsors

SILVER

Mozilla

Intel

BRONZE

Oculus
