
Add Eye Tracking Functionality and API #2

Open
2 of 10 tasks
SimLeek opened this issue Nov 24, 2021 · 22 comments
Labels
enhancement New feature or request

Comments

@SimLeek
Member

SimLeek commented Nov 24, 2021

Is your feature request related to a problem? Please describe.
VRChat needs to see my emotions.

Describe the solution you'd like
This is a bit more than just tracking pupils. In order to get emotions passing into games you need:

  • Eye tracking for both eyes including pupil position and size
  • Infrared Camera for tracking both eyes
  • A simple API that can be expanded on

Then, later on:

  • Track facial shape keys or landmarks around eyes
    • eyebrows
    • upper eyelid closed
    • squinting
    • etc.; there are a lot of muscles around the eyes
  • Track the mouth using facial landmarks or shape keys
  • Have an API that allows any subset of the tracking features to be implemented.
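As a purely illustrative sketch of what "a simple API that can be expanded on" could look like, here is a minimal gaze-state structure. All class and field names here are assumptions for illustration, not the project's actual API:

```python
from dataclasses import dataclass

@dataclass
class EyeState:
    """Tracking state for a single eye (names are hypothetical)."""
    pupil_x: float      # normalized horizontal pupil position, -1.0 .. 1.0
    pupil_y: float      # normalized vertical pupil position, -1.0 .. 1.0
    pupil_size: float   # pupil diameter relative to the eye opening, 0.0 .. 1.0
    openness: float     # 0.0 = fully closed, 1.0 = fully open

@dataclass
class GazeState:
    """Combined state the API would hand to a game or driver."""
    left: EyeState
    right: EyeState
    timestamp: float    # capture time in seconds, for interpolation

# Example: both eyes looking slightly left, fully open
state = GazeState(
    left=EyeState(-0.2, 0.0, 0.5, 1.0),
    right=EyeState(-0.2, 0.0, 0.5, 1.0),
    timestamp=0.0,
)
```

Later features (eyebrows, squint, mouth landmarks) could then be added as optional fields without breaking existing consumers, which is the "can be expanded on" part.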

Additional context
Already got pupil tracking working. Need to work on the hardware and some API now:

2021-06-24_16-48-49.mp4
@SimLeek SimLeek added the enhancement New feature or request label Nov 24, 2021
@okawo80085
Member

I'll start experimenting on the driver side of things; hopefully the OpenVR driver APIs will be enough.

@SimLeek
Member Author

SimLeek commented Nov 25, 2021

sweet

@SimLeek SimLeek pinned this issue Nov 25, 2021
@SimLeek
Member Author

SimLeek commented Nov 25, 2021

Pinned because it's an epic level feature and maybe one of the most important ones.

@okawo80085
Member

Thought I'd drop an update here: using events for passing data to the application works; I'll try using it for some test apps soon.

@okawo80085
Member

Well, using events to pass the full eye-tracking state didn't work, so we switched to using events to signal whether eye tracking is active or not, and shared memory to actually pass the gaze state. The new driver device is almost done (it still needs some cosmetic tweaks).

Hardware-wise, showmewebcam running on a Raspberry Pi Zero v1.3 with a NoIR camera module works great; now we just need a better mount for it.
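The events-plus-shared-memory split described above can be sketched roughly like this (the segment name, memory layout, and helper names are placeholders, not what the driver actually uses):

```python
import struct
from multiprocessing import shared_memory

# Placeholder layout: 4 float32 values per eye (x, y, pupil size, openness).
FMT = "8f"
SIZE = struct.calcsize(FMT)

# The driver side would create the segment; the tracker attaches by name.
# "gaze_demo" is an illustrative name only.
shm = shared_memory.SharedMemory(name="gaze_demo", create=True, size=SIZE)

def write_gaze(values):
    """Pack the current gaze state into the shared segment."""
    struct.pack_into(FMT, shm.buf, 0, *values)

def read_gaze():
    """Unpack the latest gaze state from the shared segment."""
    return struct.unpack_from(FMT, shm.buf, 0)

# Tracker writes; driver reads (events would only signal active/inactive).
write_gaze((-0.2, 0.1, 0.5, 1.0, -0.2, 0.1, 0.5, 1.0))
latest = read_gaze()

shm.close()
shm.unlink()
```

Keeping only an on/off flag in events and the hot gaze data in shared memory avoids serializing the full state through the event channel every frame, which matches the switch described above.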

@okawo80085
Member

okawo80085 commented Dec 9, 2021

Idea for the AI model for this addon (@SimLeek, please tell me if this is even reasonable or not): during initial setup, make the user go through manual calibration where they need to follow a dot on the screen; we record their eye movement and use that to train a tiny model locally, just for them. (Also maybe automatically retrain the model at some time interval to stay accurate, again if that's even possible xd)

@SimLeek
Member Author

SimLeek commented Dec 9, 2021

I definitely think that's reasonable; however, I don't know about retraining a whole model. I'm thinking more of a simple matrix transform to multiply the model output by to get the exact screen position, or maybe a single- or double-layer model.

I'm not 100% sure what to use for the calibration, but I feel like it should be something simple, since retraining the whole neural net could take a few days on a standard PC, depending on the model we choose.
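The "simple matrix transform" idea can be sketched as a least-squares affine fit: record the raw model outputs while the user follows the calibration dot, then solve for the matrix mapping them to the known on-screen dot positions. This is only an illustration of the idea, not project code:

```python
import numpy as np

def fit_calibration(raw, target):
    """Fit an affine map raw -> target via least squares.

    raw:    (N, 2) model outputs recorded while following the dot
    target: (N, 2) known on-screen dot positions
    Returns a (3, 2) matrix A such that [x, y, 1] @ A ~ target.
    """
    ones = np.ones((raw.shape[0], 1))
    X = np.hstack([raw, ones])            # homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, target, rcond=None)
    return A

def apply_calibration(A, raw):
    """Map raw model outputs to calibrated screen positions."""
    return np.hstack([raw, np.ones((raw.shape[0], 1))]) @ A

# Synthetic check: recover a known scale + offset from fake recordings
rng = np.random.default_rng(0)
raw = rng.uniform(-1, 1, size=(50, 2))
target = raw * 1.5 + np.array([0.1, -0.2])   # ground-truth transform
A = fit_calibration(raw, target)
err = np.abs(apply_calibration(A, raw) - target).max()
print(err)  # near machine epsilon for this noiseless synthetic data
```

Fitting a 3x2 matrix like this takes milliseconds even on a weak PC, which is why it sidesteps the days-of-retraining concern above.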

@okawo80085
Member

Yeah, I mean very few layers for the whole model, so that even a low-spec VR PC could train it relatively fast.

@okawo80085
Member

Also, we could pre-train it a bit.

@okawo80085
Member

okawo80085 commented Dec 9, 2021

@SimLeek Can you experiment with model designs while I make the calibration app?

The model only needs two outputs in the (-1, 1) range for the gaze direction, and maybe one output in the (0, 1) range for how closed the eye is.
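For illustration only, a toy (untrained) network with the requested output ranges could look like this: two gaze outputs squashed by tanh into (-1, 1) and one eye-closedness output squashed by a sigmoid into (0, 1). The layer sizes and random weights here are placeholders, not a proposed architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# A deliberately tiny two-layer network: flattened eye image in,
# (gaze_x, gaze_y, openness) out. Sizes are arbitrary placeholders.
IN, HIDDEN = 32 * 32, 16
W1 = rng.normal(0, 0.1, (IN, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (HIDDEN, 3))
b2 = np.zeros(3)

def forward(image):
    """Return (gaze_x, gaze_y, openness) for one flattened eye image."""
    h = np.tanh(image @ W1 + b1)
    raw = h @ W2 + b2
    gaze = np.tanh(raw[:2])                     # squashed into (-1, 1)
    openness = 1.0 / (1.0 + np.exp(-raw[2]))    # sigmoid into (0, 1)
    return gaze[0], gaze[1], openness

gx, gy, op = forward(rng.uniform(0, 1, IN))
print(gx, gy, op)
```

The squashing activations guarantee the outputs land in the required ranges regardless of training, so the driver side never has to clamp.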

@SimLeek
Member Author

SimLeek commented Dec 9, 2021

Hmm... I could translate the facial landmarks to an eye closed/open state. Gaze direction should ideally be doable with just the blob tracking, but that gets messy with other stuff in the view.

Will experiment.
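As a rough illustration of the blob-tracking idea (not the project's actual tracker), pupil position can be estimated as the centroid of the darkest pixels in the IR frame. Eyelashes and shadows that also fall below the threshold are exactly the "other stuff in the view" that makes this messy:

```python
import numpy as np

def pupil_center(gray, threshold=40):
    """Estimate pupil position as the centroid of the darkest pixels.

    gray: 2D uint8 IR image of the eye. Under IR lighting the pupil is
    typically the darkest region in the frame.
    Returns (x, y) normalized to -1..1, or None if nothing is dark enough.
    """
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None
    h, w = gray.shape
    cx, cy = xs.mean(), ys.mean()
    return 2 * cx / (w - 1) - 1, 2 * cy / (h - 1) - 1

# Synthetic frame: bright background with one dark "pupil" blob
frame = np.full((64, 64), 200, dtype=np.uint8)
frame[20:30, 40:50] = 10
print(pupil_center(frame))
```

A real version would need to reject spurious dark regions (e.g. by keeping only the largest connected blob), which is where the messiness comes in.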

@SimLeek
Member Author

SimLeek commented Dec 9, 2021

Btw, even with very few layers, some models can take a very long time to train. This is especially the case with transformers, which can be trained on massive GPUs and then possibly run on the Pi.

@okawo80085
Member

So as not to be completely useless today, I'll at least post some photos of the setup I have right now :P

(two photos of the current prototype setup)

@ghost

ghost commented Mar 12, 2022

This is quite old, assuming this project is dead atm?

@SimLeek
Member Author

SimLeek commented Mar 13, 2022

Oh yeah, this one's much closer to being done now. However, I think I'll need to edit the todo list, because now we need to figure out the right hardware.

I think the plan of adding it to any headset won't work well because of the power cord, but an addon for DIY or Valve headsets would still be really good.

@ghost

ghost commented Mar 13, 2022

Since there's some space to place any ol' microcontroller inside where the USB is, I would think of a PCB design that connects to the USB port and basically interfaces with the Raspberry Pi Zero, with a ribbon cable that connects to the cameras below the headset; it just needs a 3D-printed guard, more or less, to protect the ribbon cable. As for the gasket, it would definitely need to be modded, or at least retrofitted in some way that clips onto the gasket instead of 3D printing the entire gasket. Or something like that.

What I'm more curious about is the optimal placement of the camera, assuming someone is playing with the highest FOV range on the Index (das me).

@okawo80085
Member

This is quite old, assuming this project is dead atm?

I'd say it's more stalled than dead. We need someone else to continue the hardware design; I can't do it anymore because of the war xD

I would think of a PCB design that connects to the USB port and basically interfaces with the Raspberry Pi Zero, with a ribbon cable that connects to the cameras below the headset; it just needs a 3D-printed guard, more or less, to protect the ribbon cable.

Oh, for sure. I had mine in that orientation because I didn't have longer ribbon cables for the camera :/

As for the gasket, it would definitely need to be modded, or at least retrofitted in some way that clips onto the gasket instead of 3D printing the entire gasket. Or something like that.

I wish that were an option, but the camera and LED cables need to go through the gasket, and making holes in the OG gasket is not something I wanted to do to my Index. Also, that camera module needs to sink into the gasket quite a lot (not sure if it's visible in the pictures), so the holes would need to be quite big.

What I'm more curious about is the optimal placement of the camera, assuming someone is playing with the highest FOV range on the Index (das me).

In the pics it was at the highest FOV range and pretty usable; the spot the camera rests in just has a lot of empty space, even when the lenses are closest to your face. But without the foam the lenses were too close to the user's eyes, so I usually had to either dial them back a bit or get something to replace the foam xD

@okawo80085
Member

Also, the driver side of this thing is almost done, so I'd say the only things missing are the hardware and the firmware for it, the latter of which I had some progress on... before the war started... now I'm not sure if even my hardware prototypes survived :/

@ghost

ghost commented Mar 14, 2022

oof, sorry about the war :/

@okawo80085
Member

Eh, don't worry about it. I'm safe from military actions, just stuck in the west of the country without most of my equipment :/

@ghost

ghost commented Mar 19, 2022

Would it be fine if you shared some hardware specifics you had in mind for the eye tracking? I'm going to attempt to revamp the look of the hardware based on the photo you posted, to make it more feasible to attach to the headset without the prototype attachments, as well as create an alternative version for people who want fans on the headset.

@okawo80085
Member

Oh yeah, for sure. My prototype used a single Raspberry Pi Camera Module 2 NoIR with an IR LED, all connected to a Raspberry Pi Zero (originally I planned for two camera modules, one for each eye, but I couldn't get the expansion module for that in time, so I decided to go with a one-eyed version to prototype the software first). The camera module was oriented to see the user's eye when wearing the headset (more or less; it was a pain to get a good angle on the user's eye).

The RPi was running showmewebcam, so it appeared as a normal webcam to the user's PC when connected to the headset's USB port. From there it was used by our prototype tracking software (I say prototype because it was just a Python script running a single, really under-trained neural network), which was then supposed to send that data to our driver, and the driver would handle the rest. Except I couldn't finish that device yet; I need to test it more, make some mock apps to see if it behaves as intended, etc. (thankfully I can still do that even now, just more slowly, because I don't have my VR PC :L )

Here are some useful links:

Discord server that me and the other guys working on this usually hang out on (check out the #software and #face-tracking channels): https://discord.gg/rMsV5YBwQ9

Progress on the dev version of the driver that incorporates this device type (GitHub CI is set up, so you can just download the latest built version and try it out if you want, though the poser provided with it is very basic): HoboVR-Labs/hobo_vr#1

"The hell is a poser", and other misc things about hobo_vr (this does not include the documentation for the new dev version of the driver though): https://www.hobovrlabs.org/docs/html/getting_started.html#what-is-a-poser

And lastly, good luck and have fun!
