Add Eye Tracking Functionality and API #2
I'll start experimenting on the driver side of things; hopefully the OpenVR driver APIs will be enough
Sweet
Pinned because it's an epic-level feature and maybe one of the most important ones.
Thought I'd drop an update here: using events for passing data to the application works; I'll try using it for some test apps soon
Well, using events to pass the full eye tracking state didn't work, so we switched to using events to signal whether eye tracking is active or not, and shared memory to actually pass the gaze state. The new driver device is almost done (still needs some cosmetic tweaks). Hardware-wise, showmewebcam running on a rpi zero v1.3 with a NoIR module works great; now we just need a better mount for it.
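For anyone following along, here's a minimal sketch of what the application-side reader for that design could look like. Everything project-specific in it is an assumption: the segment name `hobovr_eye_gaze` and the struct layout are illustrative, not the driver's actual ABI.

```python
import struct
from multiprocessing import shared_memory

# Hypothetical layout: a uint32 "tracking active" flag, then three floats:
# gaze_x, gaze_y in (-1, 1) and eyelid openness in (0, 1).
GAZE_FMT = "<Ifff"

def read_gaze(shm_name: str = "hobovr_eye_gaze"):
    """Attach to the driver's shared-memory block and unpack one sample."""
    shm = shared_memory.SharedMemory(name=shm_name, create=False)
    try:
        active, gaze_x, gaze_y, eyelid = struct.unpack_from(GAZE_FMT, shm.buf, 0)
        return bool(active), (gaze_x, gaze_y), eyelid
    finally:
        shm.close()

if __name__ == "__main__":
    active, gaze, eyelid = read_gaze()
    print(f"tracking active={active}, gaze={gaze}, eyelid={eyelid:.2f}")
```

In a real client the segment would stay mapped and be read every frame, with the OpenVR event only toggling whether the data is worth reading.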
Idea for the AI model for this addon (@SimLeek, please tell me if this is even reasonable or not): during initial setup, make the user go through manual calibration where they need to follow a dot on the screen; we record their eye movement and use that to train a tiny model locally, just for them. (Also maybe automatically retraining the model at some time interval to stay accurate, again if that's even possible xd)
I definitely think that's reasonable, but I don't know about retraining a whole model. More like setting up a simple matrix transform to multiply the model output by to get the exact screen position, or maybe a single- or double-layer model. I'm not 100% sure what to use for the calibration, but I feel like it should be something simple, since retraining the whole neural net could take a few days on a standard PC, depending on the model we choose.
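To make the matrix-transform idea concrete, here's a sketch of fitting an affine correction with plain least squares, assuming the calibration app has already recorded raw gaze readings against known dot positions (all names and shapes here are illustrative):

```python
import numpy as np

def fit_affine(raw: np.ndarray, target: np.ndarray) -> np.ndarray:
    """raw: (N, 2) uncalibrated gaze readings; target: (N, 2) known dot
    positions. Returns a 3x2 matrix A such that [x, y, 1] @ A ~= target."""
    X = np.hstack([raw, np.ones((raw.shape[0], 1))])  # homogeneous inputs
    A, *_ = np.linalg.lstsq(X, target, rcond=None)
    return A

def apply_affine(A: np.ndarray, raw_xy) -> np.ndarray:
    x, y = raw_xy
    return np.array([x, y, 1.0]) @ A

# Toy check: synthesize readings that are a scaled/shifted version of the
# dot positions and confirm the fit recovers the mapping.
rng = np.random.default_rng(0)
dots = rng.uniform(-1, 1, size=(9, 2))                       # 9 calibration dots
raw = (dots - 0.1) / 1.2 + rng.normal(0, 0.01, dots.shape)   # noisy readings
A = fit_affine(raw, dots)
print(apply_affine(A, raw[0]), "vs", dots[0])
```

Fitting this takes milliseconds even on weak hardware, so it could also be re-run periodically in place of retraining the whole network.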
Yeah, I mean very few layers for the whole model, so that even a low-spec VR PC could train it relatively fast.
Also, we could pre-train it a bit.
@SimLeek, can you experiment with model designs while I make the calibration app? The model only needs two outputs with range (-1, 1) for the gaze direction, and maybe one output within the (0, 1) range for how closed the eye is.
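Just to pin down that interface, here's a sketch of those output heads with a single small hidden layer; the feature vector, layer sizes, and weights are placeholders (a real model would be trained on the recorded calibration data):

```python
import numpy as np

rng = np.random.default_rng(0)
IN, HID = 8, 16                                  # hypothetical feature/hidden sizes
W1, b1 = rng.normal(0, 0.1, (IN, HID)), np.zeros(HID)
W2, b2 = rng.normal(0, 0.1, (HID, 3)), np.zeros(3)

def forward(features: np.ndarray):
    """Map eye features to (gaze_x, gaze_y) in (-1, 1) and eyelid in (0, 1)."""
    h = np.tanh(features @ W1 + b1)              # one small hidden layer
    out = h @ W2 + b2
    gaze = np.tanh(out[:2])                      # tanh bounds gaze to (-1, 1)
    eyelid = 1.0 / (1.0 + np.exp(-out[2]))       # sigmoid bounds eyelid to (0, 1)
    return gaze, eyelid

print(forward(rng.normal(size=IN)))
```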
Hmm... I could translate the facial landmarks to an eye closed/open state. Gaze direction should be doable with just the blob tracking, ideally, but that's messy with other stuff in the view. Will experiment.
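For reference, here's roughly what that blob-tracking step could look like with OpenCV: in an IR eye image the pupil is usually the darkest region, so threshold, take the largest dark contour, and normalize its centroid to (-1, 1). The threshold value is an assumption that depends on the camera and IR lighting:

```python
import cv2
import numpy as np

def pupil_center(gray: np.ndarray):
    """Return the pupil centroid in (-1, 1) image coords, or None if not found."""
    blur = cv2.GaussianBlur(gray, (7, 7), 0)
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)  # keep dark pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)       # biggest dark blob ~= pupil
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = gray.shape
    return (2.0 * cx / w - 1.0, 2.0 * cy / h - 1.0)
```

The "other stuff in the view" problem (eyelashes, glints, frame edges) is exactly why the raw centroid would still go through the calibration transform or a small model rather than being used directly.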
Btw, even with very few layers, some models can take very long to train. This is especially the case with transformers, which can be trained on massive GPUs and then possibly run on the Pi.
This is quite old; am I right in assuming this project is dead at the moment?
Oh yeah, this one's much closer to being done now. However, I think I'll need to edit the todo list, because now we need to figure out the right hardware. I think the plan of adding it to any headset won't work well because of the power cord, but an addon for DIY or Valve headsets would still be really good.
Since there's some space to place any ol' microcontroller inside where the USB port is, I'd think of a PCB design that connects to the USB port and can interface with the Raspberry Pi Zero, with a ribbon cable that runs to cameras below the headset; it just needs a 3D-printed guard, more or less, to protect the ribbon cable. As for the gasket, it would definitely need to be modded, or at least retrofitted in some way that clips onto the gasket instead of 3D printing the entire gasket. Or something like that. What I'm more curious about is the optimal placement of the camera, assuming someone is playing at the highest FOV setting on the Index (das me).
I'd say it's more stalled than dead; we need someone else to continue the hardware design, I can't do it anymore because of the war xD
Oh for sure, I had mine in that orientation because I didn't have longer ribbon cables for the camera :/
I wish that was an option, but the camera and LED cables need to go through the gasket, and making holes in the OG gasket is not something I wanted to do with my Index. Also, that camera module needs to sink into the gasket quite a lot (not sure if it's visible in the pictures), so the holes would need to be quite big.
In the pics it was at the highest FOV setting, and it was pretty usable; the spot the camera rests in just has a lot of empty space, even when the lenses are closest to your face. But without the foam the lenses were too close to the user's eyes, so I usually had to either dial them back a bit or get something to replace the foam xD
Also, the driver side of this thing is almost done, so I'd say the only things missing are the hardware and the firmware for it, the latter of which I had some progress on... before the war started... Now I'm not sure if even my hardware prototypes survived :/
Oof, sorry about the war :/
Eh, don't worry about it. I'm safe from military action, just stuck in the west of the country without most of my equipment :/
Would it be fine if you shared some hardware specifics you had in mind for the eye tracking? I'm going to attempt to revamp the look of the hardware, based on the photo you have, to make it more feasible to attach to the headset without the prototype attachments, as well as create an alternative version for people who want fans on the headset.
Oh yeah, for sure. My prototype was using a single NoIR camera module on a Raspberry Pi Zero v1.3, and the rpi was running showmewebcam. Here are some useful links:
- The Discord server me and the other guys working on this usually hang out on (check out the …)
- Progress on the dev version of the driver that incorporates this device type (GitHub CI is set up, so you can just download the latest built version and try it out if you want; the poser provided with it is very basic though): HoboVR-Labs/hobo_vr#1
- What the hell is a poser, and other misc things about hobo_vr (this does not include the documentation for the new dev version of the driver though): https://www.hobovrlabs.org/docs/html/getting_started.html#what-is-a-poser

And lastly, good luck and have fun!
**Is your feature request related to a problem? Please describe.**
VRChat needs to see my emotions.

**Describe the solution you'd like**
This is a bit more than just tracking pupils. In order to get emotions passing into games you need:

Then, later on:

**Additional context**
Already got pupil tracking working. Need to work on the hardware and some API now:
2021-06-24_16-48-49.mp4