We need to be able to create custom realities that leverage the native video / view.
Example: I would like to create a custom live video reality where the user's location is fixed (e.g., I've told them to stand at a certain place) or where I heavily smooth (or freeze) GPS location data while the user is doing something in my app.
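For illustration, here is a minimal sketch of the kind of location filter such a custom reality could apply before consuming GPS fixes. This is plain TypeScript, not tied to any existing API; the `GeoFix` and `LocationFilter` names are made up for this example.

```ts
// Hypothetical sketch: filter raw GPS fixes before a custom reality
// consumes them. None of these types exist in any current API.

interface GeoFix {
  latitude: number;   // degrees
  longitude: number;  // degrees
  timestamp: number;  // ms since epoch
}

type Mode = 'passthrough' | 'smooth' | 'frozen';

class LocationFilter {
  private last: GeoFix | null = null;

  constructor(
    private mode: Mode = 'smooth',
    private alpha = 0.1, // smoothing factor: lower = heavier smoothing
  ) {}

  /** Pin the user to a fixed location (e.g., "stand at this spot"). */
  freezeAt(fix: GeoFix): void {
    this.mode = 'frozen';
    this.last = fix;
  }

  /** Feed a raw GPS fix; returns the fix the reality should use. */
  update(raw: GeoFix): GeoFix {
    switch (this.mode) {
      case 'frozen':
        // Ignore incoming fixes; keep reporting the pinned location.
        return this.last ?? raw;
      case 'smooth': {
        if (!this.last) {
          this.last = raw;
          return raw;
        }
        // Exponential moving average over lat/lon to damp GPS jitter.
        this.last = {
          latitude: this.last.latitude + this.alpha * (raw.latitude - this.last.latitude),
          longitude: this.last.longitude + this.alpha * (raw.longitude - this.last.longitude),
          timestamp: raw.timestamp,
        };
        return this.last;
      }
      default:
        this.last = raw;
        return raw;
    }
  }
}
```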
When we eventually support devices like Tango, I can imagine wanting to leverage the local sensor data in the reality, in ways that are shared across apps.
This seems like it should be possible, but we would want to make sure it's efficient.