This workshop focuses on the functional requirements for enterprise AR components. Enterprise AR customers have requirements that differ substantially from those of consumers. Defining functional requirements directly benefits enterprise customers: products and services will be interoperable, customer RFPs will be easier to create and respond to, and the research and development communities will have a clearer understanding of what enterprise AR buyers need.
ISMAR attendees who conduct research on enterprise AR, as well as providers of AR components and solutions, will gain clear definitions of customer needs. This will lead to higher-value research and greater enterprise AR project success, which can in turn influence research agendas, development roadmaps, and future products.
A preliminary set of enterprise AR requirements was created in 2016 through a collaboration between UI LABS (DMDII) and the AREA, delivered through a project led by Lockheed Martin, Caterpillar, and Procter & Gamble. In 2017 and 2018, these requirements were refined through several additional cycles of stakeholder input.
This workshop will shed new light on the requirements' current status and provide valuable input toward the further refinement and application of the enterprise AR requirements documents.
Despite recent progress in display technology, we are still far from the ultimate goal of creating new virtual environments, or augmentations of existing ones, that feel and react like their real counterparts. Many challenges and open research questions remain, mostly in the areas of multimodality and interaction. For example, current setups predominantly target the visual and auditory senses, neglecting modalities such as touch and smell that are an integral part of how we experience the real world around us. Likewise, it is still an open question how best to interact and communicate with a virtual world or with virtual objects in AR. Multimodal interaction offers great potential not only to make this experience more realistic, but also to provide more powerful and efficient means of interacting with virtual and augmented worlds. The aim of this workshop is therefore to investigate all aspects of multimodality and multimodal interaction in VR and AR. What are the most pressing research questions? What are the most difficult challenges? What opportunities do modalities other than vision offer for VR and AR? What are new and better ways to interact with virtual objects and to improve the experience of VR and AR worlds?
MADVR 2018 aims to present the most recent work in multimedia analysis in the context of applications for architecture, design, and Virtual Reality (VR) games. Large amounts of visual and textual data of great interest to architects and video game designers are being generated today, such as paintings, archival footage, documentaries, movies, reviews, catalogues, and artwork. In its current form, however, this content is difficult to reuse and repurpose for creative-industries applications such as game creation, architecture, and design.
This need is being addressed by recent research projects (e.g., H2020 V4Design, H2020 Inception, H2020 DigiArt, H2020 REPLICATE), which focus on developing technologies for automatic content analysis and seamless transformation, helping the creative industries share content and maximize its exploitation. In this context, MADVR has a special interest in image and video analysis, 3D reconstruction, and multilingual analysis that can be applied to VR game authoring and design applications.
Although devices and software are maturing toward novel Mixed Reality systems, there is too little connection to the design field. Especially when AR is combined with other "smart" technologies (the Internet of Things), perspectives shift from purely technical characteristics and quantifiable human factors to more complex UX scenarios. Although other special interest groups and conferences partly cover this theme (CHI, IUI, UIST), we believe ISMAR is the better venue for connecting the graphics/tracking community with design researchers. We would also like to address the lack of software engineering skills among design students and professionals. How can we bridge these disciplines and silos of innovation?
This workshop invites both industrial and academic participants to contribute to this debate, first of all by submitting extended abstracts that cover case studies, best practices, and challenges in design for and with AR. To foster a design debate, we strongly encourage submissions of annotated artworks, 3D scenes, pictures, and floor plans as well as more traditional papers.
Many researchers and companies are developing technologies for autonomous vehicles, most of which focus on safety control and efficient path planning. For autonomous vehicles to be socially accepted, however, the comfort of passengers who are freed from driving is an important issue. Passengers feel discomfort when the vehicle behaves unexpectedly or follows an unexpected path. In addition, motion sickness will become a greater problem because passengers will be less able to anticipate the vehicle's behavior. In the near future, as the windscreen of an autonomous vehicle becomes an AR display, problems of VR sickness are expected to increase as well. For these reasons, passengers in autonomous vehicles will face many sources of discomfort. This workshop focuses on technologies that improve passenger comfort in autonomous vehicles, such as sensing methods for passengers and the environment, AR technology, AR user interfaces, and AR displays in autonomous vehicles.
Virtual Reality (VR) and Augmented Reality (AR) are becoming mainstream. With recent research and technological advances, it is now possible to use these technologies in almost any domain and place. This creates an opportunity to build applications whose impact on society goes well beyond entertainment. The world today faces challenges in healthcare, the environment, and education, and now is the time to explore how VR/AR might be used to solve these widespread societal challenges.
The third Virtual and Augmented Reality for Good (VAR4Good) workshop will bring together researchers, developers, and industry partners to present and promote research that aims to solve real-world problems using VR/AR. The workshop will provide a platform for growing a research community that discusses the challenges and opportunities of creating Virtual and Augmented Reality for Good.
We invite application and position papers (2-4 pages, excluding references) that address how VR/AR technologies can solve real-world problems in various application domains including, but not limited to, health, the environment, education, sports, the arts, and support for special needs (assistive, adaptive, and rehabilitative applications). Our focus and preference will be applications that go beyond general uses of VR/AR. Please see the full CFP on our website.