Founder & CEO
Inventor, Patent Holder
Head of Product
Head of Design
Technical Lead, SDK
Consumer virtual reality devices emerged out of an abundance of repurposed high-volume, low-cost smartphone components. The case for consumer VR was made when Oculus first released its Rift headset and transformed the market following its acquisition by Facebook. Since then, many companies have taken the concept of repurposing smartphone components and applied it in various strategies and embodiments. Most notable was Google, with its introduction of the Cardboard VR headset and SDK. Since the start of these two movements, many companies have jumped in to create variations of two basic designs:
Each approach has its own advantages and disadvantages; however, only the former has emerged as a sustainable long-term strategy.
The benefit of dedicated VR systems lies in the immersion and quality of the experience. Low-latency processing and sensors provide more fluid, interactive experiences with high utility and engagement. For smartphone VR, the main advantage is the cheap form factor of the smartphone enclosure, which allows easy distribution and high scalability given the abundance of smartphone devices users already own.
The issue with dedicated VR systems was that they were yet another device that had to be purchased, often at a considerable price point and with a friction-filled onboarding and setup process. This results in low scalability and limited use cases, hence the challenges VR manufacturers experience today. Conversely, the issue with smartphone-based VR systems was that the experience itself is lackluster: lower fidelity, uncomfortable cheap enclosures, and most notably a lack of meaningful interaction. The result is typically a short, 30-second ‘gaze’ experience that is a novelty but provides no meaningful utility. The two basic designs can therefore either provide high utility at the expense of scalability, or scalability at the expense of utility.
The hard trend in personal computing is that it will become more engaging and immersive. As smartphone screen sizes and processing power increase, their form factor will also evolve. The direction of that evolution is driven primarily by the visual experience, which is why the glasses form factor seems to be the logical conclusion of personal computing devices. In the meantime, we can take advantage of soft trends particular to smartphones and their current form factor. So what are the truths here? Every year smartphones receive incremental improvements in both capabilities and variety of sensors: larger, more pixel-dense displays; upgrades in radio technology like 5G; more cameras with higher capabilities; and additional inputs like proximity sensors and microphone arrays.
If we are to solve for scalability and utility together, we first need to address the cost of the potential solution. If we are constrained to the same cost range as existing smartphone-based VR enclosures, how can we then address utility? What other avenues can we explore to deliver that value? Have we considered the full capabilities of smartphones? Have all of those capabilities been utilized to their maximum potential? That is the key question. To solve for utility while basing the solution on an existing form factor like the VR enclosure, we would need to revisit how we are utilizing the smartphone and see whether we can extract additional utility from the VR experience. Let’s look at what is needed for utility in a VR environment:
To define a minimum level of utility within a VR experience, I derived a progressive scale from the lowest to the highest perceived level of VR utility. The following is a linear progression of utility based on the technical capability of the VR device. The goal was to target the minimum viable utility while being realistic about the technical capabilities and constraints of smartphone technology. The graph below shows the lowest-utility VR experience on the left and the highest on the right. The goal was to reach 80% of the utility using 20% of the technical capabilities of the available smartphone device. Note that this is a graph of VR utility, i.e. what capabilities the user has within the VR environment, not of VR experience quality (which would be a different scale).
Therefore, a new category of device is needed to solve the problem. According to the hypothesis above, it may be possible to provide a high-utility VR experience using existing smartphone technology repurposed to perform new functions. This is the basis of the product strategy and requirements.
SpaceCard would be an interactive VR environment built as a universal sandbox system with both mobile and VR user interfaces, allowing users to upload 3D spaces and objects to the cloud. Users would be able to invite others to collaborate, chat, and interact in VR environments in real time. These spaces would be accessible through an Instagram-like experience that lets users jump into VR environments using nothing more than a mobile app and a smartphone accessory.
The product design decisions for SpaceCard were vastly different from those of most other VR platforms. Because of the cost-per-unit constraint (less than $10 USD/unit), we could not put any electronics into the physical device; instead, it is powered entirely by the technology and sensors already available in smartphones to drive interactive VR sessions. This meant we had to manage resources on the smartphone carefully so as not to overload the electronics, and also manage the user’s VR fidelity expectations. As SpaceCard would essentially be a mobile app with an accompanying accessory, we had to focus on developing key technologies that would unlock new capabilities from the smartphone.
Relying on existing smartphone technology to fulfill the majority of VR functionality would give SpaceCard a new level of cost efficiency. If this technical strategy were successful, we could significantly reduce or eliminate the need for electronics in the physical product, allowing us to approach price-per-unit ranges that would let SpaceCard enter new categories. Because of this, the target price for SpaceCard was less than $10 USD/unit. If we could reach this price, our product positioning possibilities would greatly increase and fundamentally change how consumers thought of VR devices. We also considered the scalability opportunities this would create and thought hard about other opportunities for cost cutting and efficiency improvements. At the sub-$10 USD/unit threshold, shipping costs become vastly more critical. This is why the form factor of the physical device was a major consideration: it could greatly impact the total cost of the unit (device + shipping). The cheapest form of shipping is lettermail, which would be a tremendously difficult form factor to fit a VR device into, but we felt it was a worthwhile pursuit.
As per the hypothesis above, if we could achieve a physical cost per unit under $10 USD with a lettermail form factor, we could open product positioning possibilities never before possible, including:
Today’s mobile devices provide much more capability than they did just a couple of years ago and, with the right platform, can deliver higher-quality and higher-utility experiences. We wanted to see what was possible with interactive, free-to-roam spaces combined with the speed of 5G mobile devices, particularly for interactive spatial collaboration.
The smartphone could only provide a finite amount of computation and resources to deliver an interactive VR experience. We used the electronics in the smartphone to provide the following functionality:
The challenge with the above scenario was that all of these processes ran simultaneously in real time on the smartphone, essentially creating a ‘stress test’ environment that pushed the thermal limits of the device. Our technical challenge was to manage these resources effectively, allowing us to ‘ride’ the thermal limitations of the device in a way that benefited the user experience. To do this, we had to address CPU and GPU computation loads so the VR session could run at near-maximum load while extending the time the device could operate at maximum performance before hitting its thermal limit. This meant our two biggest computational processes, visual graphics and computer vision processing, had to be exceptionally lightweight and efficient.
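Since SpaceCard is described later as a Unity3D app, here is a minimal Unity C# sketch of this kind of load governor: it watches sustained frame times and trades one quality tier and some frame rate for extra time before thermal throttling. The class name, thresholds, and averaging window are illustrative assumptions, not the production implementation.

```csharp
using UnityEngine;

// Hypothetical sketch: ride the device's thermal limit by watching sustained
// frame times and trading visual quality for session length.
public class AdaptivePerformanceGovernor : MonoBehaviour
{
    [SerializeField] float degradeThresholdMs = 22f; // sustained frame time that suggests throttling
    [SerializeField] float recoverThresholdMs = 14f; // headroom needed before stepping quality back up
    [SerializeField] float windowSeconds = 5f;       // how long to average before acting

    float accumulatedMs;
    int frameCount;
    float windowTimer;

    void Update()
    {
        accumulatedMs += Time.unscaledDeltaTime * 1000f;
        frameCount++;
        windowTimer += Time.unscaledDeltaTime;

        if (windowTimer < windowSeconds) return;

        float avgFrameMs = accumulatedMs / frameCount;
        int level = QualitySettings.GetQualityLevel();

        if (avgFrameMs > degradeThresholdMs && level > 0)
        {
            // Device is likely throttling: drop one quality tier and cap the frame rate.
            QualitySettings.SetQualityLevel(level - 1, applyExpensiveChanges: false);
            Application.targetFrameRate = 30;
        }
        else if (avgFrameMs < recoverThresholdMs && level < QualitySettings.names.Length - 1)
        {
            // Headroom has returned: restore one quality tier and the full frame rate.
            QualitySettings.SetQualityLevel(level + 1, applyExpensiveChanges: false);
            Application.targetFrameRate = 60;
        }

        accumulatedMs = 0f;
        frameCount = 0;
        windowTimer = 0f;
    }
}
```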
The hands-on gesture control turns the user’s smartphone camera into a real-time gesture sensor. We developed a computer vision processing algorithm that uses the smartphone camera and LED light (torch) to detect and track a left and right marker in real time in all lighting conditions. The controllers are magnetically latched onto either side of the SpaceCard visor and can be held in the user’s hands. From tapping to grabbing and swiping, the SpaceCard controllers provide full multitouch-style interaction similar to the experience of a large touchscreen. This technology converts the camera into a virtual reality sensor.
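The production tracking algorithm isn’t detailed here, but a bare-bones Unity C# sketch of the general idea (isolate very bright marker pixels in each camera frame and estimate their centroid) is shown below. The WebCamTexture source, the brightness threshold, and the single-marker simplification are all assumptions for illustration.

```csharp
using UnityEngine;

// Illustrative sketch only: find the centroid of bright marker pixels in a
// camera frame. A real tracker would also separate left/right markers,
// reject the torch's own reflections, and filter the result over time.
public class MarkerCentroidTracker : MonoBehaviour
{
    [SerializeField] int brightnessThreshold = 220; // assumed tuning value

    WebCamTexture cam;

    void Start()
    {
        cam = new WebCamTexture();
        cam.Play();
    }

    void Update()
    {
        if (!cam.didUpdateThisFrame) return;

        Color32[] pixels = cam.GetPixels32();
        long sumX = 0, sumY = 0, count = 0;

        for (int y = 0; y < cam.height; y++)
        {
            for (int x = 0; x < cam.width; x++)
            {
                Color32 p = pixels[y * cam.width + x];
                // Treat very bright pixels as belonging to a reflective marker.
                if (p.r > brightnessThreshold && p.g > brightnessThreshold && p.b > brightnessThreshold)
                {
                    sumX += x; sumY += y; count++;
                }
            }
        }

        if (count > 0)
        {
            // Normalized marker position in the camera image (0..1).
            Vector2 centroid = new Vector2(sumX / (float)count / cam.width,
                                           sumY / (float)count / cam.height);
            Debug.Log($"Marker centroid: {centroid}");
        }
    }
}
```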
Because our controllers had no Bluetooth, buttons, or other feedback mechanisms to communicate back to the smartphone, we developed gesture recognition to provide full controller functionality:
As the smartphone camera has a limited field of view, it creates restrictions around the boundaries of the space we call the interaction area. Because the smartphone display and camera are used simultaneously by SpaceCard (one to render the virtual reality scene, the other to track the hands), the interaction area sits uncomfortably high for the user, causing strain on the arms and shoulders. Since we cannot reliably increase the field of view of the rear-facing camera with mirrors, fisheye lens adapters, or additional cameras, our only option is to rotate the camera, and thus the entire smartphone assembly, down at least 30 degrees for a better view of the hands. This, however, creates a problem for the user’s ability to see the display, as it too rotates with the camera. Typical VR headsets keep the facial plane and the display plane perfectly parallel so the user’s eyes can properly focus on the image rendered by the display. Our solution is to implement a unique optics system that allows the display plane of the smartphone to be offset from the user’s facial plane while still keeping the image in focus. There are three main components to this optics system:
3,058,447
"A SYSTEM FOR GENERATING AND DISPLAYING INTERACTIVE MIXED-REALITY VIDEO ON MOBILE DEVICES"
6,205,0432
"SYSTEM FOR INTERACTIVE ENVIRONMENTS"
SpaceCard would be positioned as the entry-level VR experience for mass consumption. We wanted to focus on a target persona most open to using VR outside of a gaming setting, and based on research we decided to focus the product on students from K-12 to post-secondary. This demographic is particularly open to new technologies and modalities of interaction, with limited investment in existing systems. Use cases for SpaceCard would center on introducing younger students to the metaverse in fun and collaborative spaces, while also catering to students in post-secondary education. College and university students who need to visualize their designs in retail, interior, or architectural design courses would also benefit from this technology.
Patent-pending optics and industrial design enable a full VR system to collapse into an envelope form factor that ships like letter mail, with a three-step pop-up setup ready for use within seconds. These two technologies put SpaceCard® at the very top of VR ‘price performance’ and simplicity, delivering an unmatched experience at 20X the scalability of the industry average. Zero-setup magnetic paddles, powered by machine-vision technology, give users VR hands for interacting with virtual spaces; users can grab or stow the paddles from either side of the visor during VR sessions.
The origami-inspired, collapsible VR system is made from a single sheet of polypropylene plastic manufactured with a ‘package dieline’-inspired process. B2B customers can customize the look of SpaceCard® as part of a campaign to suit their brand, with a wide variety of traditional and sustainable materials. SpaceCard®’s patent-pending optics and industrial design enable a full VR system to collapse into an envelope form factor that ships like a postcard, with a three-step pop-up setup ready for use within seconds. The SpaceCard hardware system has two states:
SpaceCard UX design was a uniquely difficult challenge because it required multi-level thinking to solve complex problems across many different areas. The user experience had to be solved across six interconnected user epics.
The industry-average VR session duration at the time was between 3 and 5 minutes for a first-time user. Because of the thermal limitations, we knew there was a limit to how long VR sessions could run on the smartphone. That limitation pushed us toward creative solutions that continue the application experience beyond the VR session and bleed it back into ‘mobile’ mode. We called this ‘Dual Mode Back and Forth Operation’.
Mobile VR has a unique advantage over every other VR platform: it inherently enables one app to have two modes of operation based on device orientation.
We can therefore use the VR session length limitation to our advantage by encouraging a back-and-forth experience between the vertical and horizontal orientations of the smartphone. This lets us create a sense of focus around the VR session by explicitly giving it a time limit, while also continuing the experience in the vertical orientation. The strategy works well with many use cases: a task or selection is performed in mobile (vertical) mode, the experience continues in VR (horizontal) mode for immersive tasks, and then returns to mobile mode to provide supporting information such as a session summary, a questionnaire, or other tasks that augment the immersive experience or encourage a switch back to VR. We enabled this functionality at any point in the experience simply by changing the orientation of the device. The switch back to vertical also gives the smartphone an opportunity to cool down and restore thermals for further VR sessions. To the user, this is presented as ‘taking a break’, with a 3-minute lockout from VR until the device is ready for the next session.
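Below is a minimal Unity C# sketch of how this dual-mode switch and cooldown could be wired together. The 3-minute lockout comes from the design above, while the class name, events, and orientation checks are illustrative assumptions.

```csharp
using System;
using UnityEngine;

// Illustrative sketch: switch between mobile (portrait) and VR (landscape)
// modes based on device orientation, with a cooldown lockout after each
// VR session so the phone can shed heat.
public class DualModeController : MonoBehaviour
{
    const float CooldownSeconds = 180f; // 3-minute 'take a break' lockout (from the design)

    public event Action EnteredVrMode;
    public event Action EnteredMobileMode;

    float cooldownRemaining;
    bool inVrMode;

    void Update()
    {
        cooldownRemaining = Mathf.Max(0f, cooldownRemaining - Time.unscaledDeltaTime);
        DeviceOrientation orientation = Input.deviceOrientation;

        bool wantsVr = orientation == DeviceOrientation.LandscapeLeft ||
                       orientation == DeviceOrientation.LandscapeRight;

        if (wantsVr && !inVrMode && cooldownRemaining <= 0f)
        {
            inVrMode = true;
            EnteredVrMode?.Invoke();
        }
        else if (!wantsVr && inVrMode)
        {
            // Leaving VR starts the thermal cooldown before the next session.
            inVrMode = false;
            cooldownRemaining = CooldownSeconds;
            EnteredMobileMode?.Invoke();
        }
    }
}
```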
This is the opening and unboxing experience of the product itself. The goal is to minimize the steps, friction, and setup required to start a VR session. Smartphone VR systems typically have an assembly phase where the user has to set up and ‘build’ the enclosure in a DIY fashion. Our aspiration was for the UX to completely outdo any other VR system when it comes to time-to-VR. We measure the effectiveness of this experience by the number of steps required for the user to have the visor on in the VR state, starting from the collapsed envelope state. We devised a system requiring only three steps to go from the collapsed state to the expanded visor state. This helped set the expectation for the rest of the user experience.
Once the visor is set up, the user downloads the mobile app, which begins in the mobile or ‘vertical’ state. The mobile UX contained user stories that started with device calibration and, once completed, moved into the vertical version of the app designed around space selection. From a UI design perspective, we wanted to mimic an Instagram-like experience where various spaces are shown in an infinite scroll. Once selected, each space is shown in a 3D preview that renders it from a dollhouse perspective, and the user can ‘jump in’ via a yellow walking symbol. This begins the transition into VR mode.
Once the user has chosen to begin a VR session, they need to be onboarded with SpaceCard’s controller, or ‘paddle’, system. This onboarding process gets the user comfortable and familiar with using controllers in VR. Because the controllers were magnetically latched onto the sides of the visor, they were more a part of the visor design than standalone controllers like in most VR systems. We decided to first familiarize the user with the location of the controllers by having them grab the paddles and explore their shape and function before putting the headset on. This experience was guided by an animated tutorial with voiceovers while the smartphone was still in vertical mode.
Once the user was comfortable with the location and ergonomics of the controllers, they would be asked to place them back in their nested positions and then put on the visor. The transition into VR started with the user placing the smartphone into the visor and putting the visor on, at which point a VR version of the controller onboarding would commence. Because the user was already familiar with the controllers and their location on the visor, we were able to dramatically reduce friction and task failure.
Once the user has completed the necessary onboarding, they are ready to engage, interact, and perform tasks within the VR environment. Several key innovations allowed the VR experience to flourish within the limitations we had.
User Interface Layout
The user interface was split into three vertical zones. The top zone, just above the user’s main line of sight, was dedicated to status information and menu toggles. The central zone was the interactive environment, which took up the majority of the central visual real estate. The bottom zone, just below the user’s main line of sight, was dedicated to locomotion.
Top Vertical Zone: Menu & Countdown
The top vertical zone was used to engage the main menu and to keep the user aware of the time remaining in the session via a horizontal progress bar. Both components were always within sight and were engaged with a gaze gesture, so the user did not have to lift their hands into high, uncomfortable positions. The menu was used to manage objects in the space and change the perspective of the experience. Once the 10-minute counter expired, the user would be asked to take off the visor and take a break.
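The gaze mechanics aren’t specified beyond what is described above, but a minimal Unity C# sketch of a gaze-dwell trigger of the kind described (look at a menu target briefly to activate it, no raised hands required) might look like the following; the dwell time and the GazeTarget component are hypothetical.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Illustrative sketch: activate a menu target when the user's gaze
// (the VR camera's forward ray) rests on it for a short dwell time.
public class GazeDwellSelector : MonoBehaviour
{
    [SerializeField] Camera vrCamera;               // the head-tracked camera
    [SerializeField] float dwellSeconds = 1.2f;     // assumed dwell duration
    [SerializeField] float maxDistance = 10f;

    GameObject currentTarget;
    float dwellTimer;

    void Update()
    {
        Ray gaze = new Ray(vrCamera.transform.position, vrCamera.transform.forward);

        if (Physics.Raycast(gaze, out RaycastHit hit, maxDistance))
        {
            if (hit.collider.gameObject == currentTarget)
            {
                dwellTimer += Time.deltaTime;
                if (dwellTimer >= dwellSeconds)
                {
                    // Trigger whatever the target exposes (hypothetical component).
                    var target = hit.collider.GetComponent<GazeTarget>();
                    if (target != null) target.Activate();
                    dwellTimer = 0f;
                }
            }
            else
            {
                currentTarget = hit.collider.gameObject;
                dwellTimer = 0f;
            }
        }
        else
        {
            currentTarget = null;
            dwellTimer = 0f;
        }
    }
}

// Hypothetical target component: menu items implement their own response.
public class GazeTarget : MonoBehaviour
{
    public UnityEvent onActivated;
    public void Activate() => onActivated.Invoke();
}
```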
Central Vertical Zone: Object Mechanics
The central vertical zone was used to tap, drag, and otherwise interact with the virtual environment. Gestures could manipulate the environment in ways never before possible in mobile VR. Objects also had their own interactive menus providing simple functionality like move, delete, and rotate.
Bottom Vertical Zone: Locomotion
Due to the lack of 6DoF tracking, we decided to use hand gestures to drive user movement within the virtual environment. We created a single button below the user’s line of sight in the bottom vertical zone. Once this button was engaged, the user would be moved forward based on their hand movement. This was an extremely difficult technical challenge, as many users are susceptible to motion sickness from Z translation in VR. We solved this by using the acceleration of the hands along the Z axis to drive the virtual camera movement in real time and simulate a ‘swimming’ experience. The gesture to move forward was similar to tapping a set of drums with both hands, but it created a comfortable and physiologically acceptable way of moving through the space while ‘fooling’ the user into thinking they are moving with a button press. The inclusion of stepping sound effects further reinforced the sensation of walking.
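A stripped-down Unity C# sketch of this style of locomotion is shown below: forward motion is scaled by how vigorously the tracked hands move along Z while the locomotion button is held, with footstep audio reinforcing the motion. The tuning constants, the hand-transform sources, and the component names are assumptions, not the shipped implementation.

```csharp
using UnityEngine;

// Illustrative sketch: translate hand Z-movement into forward locomotion
// while the locomotion button is engaged, smoothing to keep motion comfortable.
[RequireComponent(typeof(CharacterController))]
public class GestureLocomotion : MonoBehaviour
{
    [SerializeField] Transform head;            // head-tracked camera transform
    [SerializeField] Transform leftHand;        // tracked paddle transforms
    [SerializeField] Transform rightHand;
    [SerializeField] AudioSource footsteps;     // stepping sound to reinforce walking
    [SerializeField] float speedScale = 1.5f;   // assumed tuning value
    [SerializeField] float smoothing = 5f;

    CharacterController controller;
    Vector3 prevLeft, prevRight;
    float currentSpeed;

    public bool LocomotionButtonHeld { get; set; } // set by the gaze/gesture UI

    void Start()
    {
        controller = GetComponent<CharacterController>();
        prevLeft = leftHand.localPosition;
        prevRight = rightHand.localPosition;
    }

    void Update()
    {
        // How much the hands moved along the local Z axis this frame.
        float handMotion =
            Mathf.Abs(leftHand.localPosition.z - prevLeft.z) +
            Mathf.Abs(rightHand.localPosition.z - prevRight.z);
        prevLeft = leftHand.localPosition;
        prevRight = rightHand.localPosition;

        float targetSpeed = LocomotionButtonHeld ? handMotion / Time.deltaTime * speedScale : 0f;
        currentSpeed = Mathf.Lerp(currentSpeed, targetSpeed, smoothing * Time.deltaTime);

        // Move forward along the gaze direction, keeping travel on the ground plane.
        Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
        controller.Move(forward * currentSpeed * Time.deltaTime);

        if (currentSpeed > 0.2f && !footsteps.isPlaying)
            footsteps.Play();
    }
}
```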
Perspective & Space Toggle
We felt it was critically important for the user to be able to easily navigate the space they are in by offering various vantage and access points to teleport to. This ease of movement was further enhanced by letting the user seamlessly toggle between variations of the space, and between different spaces altogether.
Once the user has completed their VR session, the application offboards them from VR mode, asking them to remove the visor and presenting a summary of their VR experience. This was a useful segue into picking up the phone and continuing the experience from a handheld perspective. We believe this back-and-forth mechanism offers many useful opportunities in the future.
Multi-user collaboration was an important part of the SpaceCard experience and needed to take advantage of as many smartphone modalities as possible to reduce friction. Because SpaceCard is fundamentally a mobile app, it can take advantage of features like notifications and pop-ups. We wanted to make inviting users into shared VR experiences as easy as making a phone call. The invitation interface was split into two user stories: a) invite users via notifications, and b) share room codes and have users join manually. Every VR experience took place within a room session, and each session had its own unique room code that could be shared with and used by other SpaceCard users.
Hosts also had the ability to send invitations to other SpaceCard users via app notifications, which invitees could accept or decline. During VR sessions, we took advantage of real-time head-tracking coordinates, location, and hand position to create abstract avatars of each user. Users could speak to one another in real time using the smartphone’s onboard loudspeaker and microphone. We used Unity3D’s Photon networking functionality extensively to bring collaboration to life on SpaceCard. This feature would result in the world’s first mobile VR metaverse.
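The collaboration layer was built on Photon networking in Unity3D; the snippet below is a minimal sketch, assuming Photon PUN 2, of how a shareable room code could map onto joining or creating a Photon room. The class name, player cap, and callback flow are illustrative assumptions.

```csharp
using Photon.Pun;
using Photon.Realtime;
using UnityEngine;

// Minimal sketch (assuming Photon PUN 2): every VR session is a Photon room
// keyed by a shareable code, so invitees can join by typing the code.
public class RoomSessionManager : MonoBehaviourPunCallbacks
{
    string pendingRoomCode;

    // Called from the mobile UI with the code the host shared.
    public void JoinByCode(string roomCode)
    {
        pendingRoomCode = roomCode.ToUpperInvariant();
        PhotonNetwork.ConnectUsingSettings();
    }

    public override void OnConnectedToMaster()
    {
        // JoinOrCreate lets the host and invitees use the same call path.
        var options = new RoomOptions { MaxPlayers = 8 }; // assumed cap
        PhotonNetwork.JoinOrCreateRoom(pendingRoomCode, options, TypedLobby.Default);
    }

    public override void OnJoinedRoom()
    {
        Debug.Log($"Joined session {PhotonNetwork.CurrentRoom.Name} " +
                  $"with {PhotonNetwork.CurrentRoom.PlayerCount} participant(s).");
        // Avatar head/hand transforms would be synchronized from here,
        // e.g. via PhotonView components on the avatar prefab.
    }
}
```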
SpaceCard® is a mobile technology and pop-up VR system that delivers an ‘Instagram-like’ experience for exploring and jumping into VR spaces using your smartphone. Unlike other systems, SpaceCard® uses the smartphone camera and machine-vision technology to track the user’s hands. This allows users to interact with drag-and-drop environments, perform walkthroughs, and configure shared spaces in real time. We needed to communicate that SpaceCard was cutting-edge technology available to everyone at a low cost. One of the biggest challenges with VR in general is the stigma that the experience will be cumbersome and complicated. We wanted to avoid that by leaning on smartphone modalities, simple interfaces, clear calls to action, and familiar terminology from the App Store world. At the end of the day, SpaceCard is just a mobile app.
We needed to distinguish between who our users are and who our customers are. Targeting the education space meant that our users would be students aged 12-25 (K-12 to post-secondary) who would use the technology, while our customers would be the procurement officers, teachers, and professors in those institutions. We needed to speak to the new generation of students to create pull for the product, but also address implementation and cost concerns with the schools and universities. Our marketing and launch strategy therefore needed two distinct voices to address our users and our customers.
SpaceCard was a name derived from the fundamental concept of the product: a postcard for interactive spaces. We used this name in combination with an aesthetic and visual style reminiscent of NASA space programs, using black, yellow, and silver to convey a futurist aspiration, as if the product came from outer space!
With such a complex product, with dozens of features and capabilities, we wanted to incorporate as many visual elements of the user interface as possible. With a heavy emphasis on animated GIFs, the web experience had many requirements to fulfill in explaining the product as simply as possible.
Much like our web experience, the launch video had to run through all six user epics within 60 seconds. This was no small feat, but we were able to include all the necessary content with the help of environmental cues, real-time video captures, and 3D animations. The idea was to imprint the full capabilities of the product on the viewer while highlighting that it all came from a postcard and a mobile app.
The software development kit program was an offshoot of our developer evangelism efforts that resulted in us building a full repository of documentation, videos and supporting material for developers building custom Unity3D apps for our platform.
As part of our product validation, we targeted Architectural Design schools, programs and courses along with commercial design studios to pilot the product and provide initial feedback. To do this we designed variations of our standard presentation deck that targeted various stakeholders and decision makers in the space.