Rotate camera orientation #5
^bump, same issue more or less. I updated my plist to show portrait with the home button at the bottom (normal orientation) and attached the result. The "face landmark mask" is drawn upright, but the camera feed is rotated to landscape. @zweigraf, maybe you could find some time to document how the orientation gets determined/configured? I've rotated an AVCaptureVideoPreviewLayer before, but rotating an AVSampleBufferDisplayLayer doesn't seem possible; I couldn't find any examples on Google.
@stanchiang I found that if you change the camera orientation to portrait or landscape, the values are still width = 640 and height = 480. I think the camera hardware may always output the buffer at this size. What you can do is rotate the image when copying pixel values from the CVPixelBuffer into dlib::array2d<dlib::bgr_pixel>. In addition, you need to rotate the input face rect.
Can you add some sample code for your implementation? I was trying to do something like that, but it wasn't working right.
For copying pixel values:
For rotating the face rect, I think you should code it yourself for convenience. You only need to change the
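Since the original snippet didn't survive in this excerpt, here is a minimal, hedged C++ sketch of the two steps described above: rotate the pixels 90° clockwise while copying them out of the buffer, and remap the face rect into the rotated coordinate space. A plain std::vector stands in for the CVPixelBuffer and dlib::array2d<dlib::bgr_pixel>, and the function names are illustrative, not from the project:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Rotate a row-major single-channel image 90 degrees clockwise.
// In DlibWrapper.mm this loop would instead copy BGRA pixels from the
// locked CVPixelBuffer base address into dlib::array2d<dlib::bgr_pixel>.
std::vector<int> rotate90cw(const std::vector<int>& src, size_t srcW, size_t srcH) {
    const size_t dstW = srcH, dstH = srcW;
    std::vector<int> dst(src.size());
    for (size_t y = 0; y < dstH; ++y)
        for (size_t x = 0; x < dstW; ++x)
            // destination (x, y) reads source column y, counting rows bottom-up
            dst[y * dstW + x] = src[(srcH - 1 - x) * srcW + y];
    return dst;
}

struct FaceRect { long left, top, width, height; };

// Remap a face rect detected in the unrotated 640x480 buffer into the
// 480x640 rotated space, so the shape predictor searches the right area.
FaceRect rotateRect90cw(FaceRect r, size_t srcH) {
    return { (long)srcH - r.top - r.height, r.left, r.height, r.width };
}
```

A 90° counter-clockwise rotation (home button on the other side) is the same idea with the index arithmetic flipped.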
@hoangdado this code doesn't work for me; have you tested it?
@teresakozera I solved your problem. You only need to update
@stanchiang You can follow this solution; it is much easier than the one I recommended before. See my fork for the source code: https://github.com/hoangdado/face-landmarking-ios Notice: with my fix, I don't know why the mouth landmarks are not exactly correct while the others are perfect!
@hoangdado That fixed the issue, Thank you! |
@hoangdado thanks, that helped a lot! I also had to apply an affine transform to the layer so that the output isn't mirrored the wrong way:
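The layer transform in question is just a horizontal flip. As a minimal sketch of what that mirroring does to pixel coordinates (plain C++, a stand-in for the scale(-1, 1) layer transform; names are illustrative):

```cpp
#include <cassert>
#include <vector>

// Horizontal mirror of a w-by-h row-major single-channel image:
// the pixel-coordinate effect of flipping the display layer with
// a scale(-1, 1) affine transform about its center.
std::vector<int> mirrorH(const std::vector<int>& src, int w, int h) {
    std::vector<int> dst(src.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst[y * w + x] = src[y * w + (w - 1 - x)];
    return dst;
}
```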
@hoangdado, thank you! It works perfectly; no distortion, even in the mouth region. :) Previously I tried to manipulate
@teresakozera for me the distortion is more of a stability issue with maintaining the tracking: the tolerance for faces at different angles seems to have gone down a bit, and when I move my face the mask gets jittery. Am I facing a different issue than you guys?
@stanchiang I also observed this problem, but it existed previously with the landscape orientation too. In my case it's not that big of an issue, as I mostly need the head facing the camera directly. But I will also try to fix it; if I succeed I will certainly let you know. :)
@teresakozera something off topic: could you please tell me how you got the landmarking lines working? All I see in the app is the dots. Thanks
@ArtSebus probably just used the function dlib::draw_line(img, <#const point &p1#>, <#const point &p2#>, <#const pixel_type &val#>); |
@stanchiang could you please suggest what should I pass in for the parameters |
@ArtSebus haven't touched C++ in a few years myself, haha. But it looks like you'd need to pass in a couple of dlib::point values that you want to connect, and then specify what value to draw the line with for the last parameter. I'd try passing in 3 for the value; no reason for that number, it's just the same value that was used when drawing the dots in the existing code.
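If dlib::draw_line misbehaves, it can help to see the operation in isolation. This is a hedged stand-in in plain C++ (no dlib dependency) showing what drawing a segment between two landmark points into a row-major image amounts to; in the real code you would call dlib::draw_line with two points from the detected shape and a pixel value:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Stand-in for dlib::draw_line: write `val` along the segment
// (x1,y1)-(x2,y2) into a w-wide row-major single-channel image.
void drawLine(std::vector<int>& img, int w,
              int x1, int y1, int x2, int y2, int val) {
    int steps = std::max(std::abs(x2 - x1), std::abs(y2 - y1));
    for (int i = 0; i <= steps; ++i) {
        int x = x1 + (steps ? (x2 - x1) * i / steps : 0);
        int y = y1 + (steps ? (y2 - y1) * i / steps : 0);
        img[y * w + x] = val;
    }
}
```

To outline the mouth you would loop over consecutive landmark indices (60 through 67 in the 68-point model), connecting each point to the next and closing the loop back to the first.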
@teresakozera trying something a little different right now: I'm storing shape.parts[60-67], which make up the mouth, in a separate array and passing it into UIKit/SceneKit to draw it separately. I'm converting from pixels to points using this function: https://gist.github.com/jordiboehmelopez/3168819 The problem is it still seems stuck in the old bounds, sort of like the old screenshot I posted. I wasn't expecting this problem because we call
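One frequent cause of the "stuck in the old bounds" symptom is converting buffer coordinates into view coordinates with the wrong scale or missing crop offsets. A hedged sketch of the mapping, assuming an aspect-fill preview (plain C++ for illustration; in the app this would be CGPoint math in Swift):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Pt { double x, y; };

// Map a point in the camera buffer (e.g. 480x640 after rotation) into
// view points, assuming the preview scales the buffer with aspect-fill
// (the buffer is scaled up until it covers the view, then centered).
Pt bufferToView(Pt p, double bufW, double bufH, double viewW, double viewH) {
    double scale = std::max(viewW / bufW, viewH / bufH); // aspect-fill
    double xOff = (viewW - bufW * scale) / 2.0;          // crop offsets
    double yOff = (viewH - bufH * scale) / 2.0;
    return { p.x * scale + xOff, p.y * scale + yOff };
}
```

If the landmarks land with a constant offset, as described later in this thread, the crop offsets (xOff, yOff) are the usual suspects; aspect-fit is the same formula with std::min instead of std::max.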
@ArtSebus for the line method, have a look at this: https://github.com/chili-epfl/attention-tracker/blob/master/README.md :) @stanchiang hmmm, a little bit odd. So with the code from here and all the changes it works, but when you try the above (a conversion from pixels to points) it displays the landmarks in the other orientation? Does it happen after it is passed to UIKit, or do you check it before?
@teresakozera solved the transformation issue; it was my own fault. But now I've noticed my CGPoint coordinates have a weird offset for some reason. For example, in my GameScene.swift file I had to add the following; here's my code to show you what I mean
@stanchiang I will have a look at it on Monday, as today I'm heading off for a slightly longer weekend. Anyway, I hope you manage to solve this problem sooner. :) Have a nice weekend!
@hoangdado @stanchiang Thanks! I used your solution and almost all the problems were solved. Later I found a better (I think) way and made a pull request: #9. In that PR I convert the faceObject in SessionHandler to fit the given orientation; even if the connection's orientation is portrait, it works well. What do you think?
@teresakozera could you suggest how to integrate the attention-tracker into the project? I've tried for some time but am still stuck and haven't gotten anywhere; I'm almost ready to pull my hair out.
I want to crop the landmarked portion of the face. I want only the face; can anyone help me with this?
@stanchiang you could simply use the "VideoMirrored" mode instead of doing the manual transforms.
@stanchiang I want to support detection in both landscape and portrait orientations; can you provide a sample demo?
bump |
@liamwalsh were you able to find a solution? I'm having the same problem as you. |
@liamwalsh I found I was setting connection.videoOrientation = AVCaptureVideoOrientation.portrait in the wrong captureOutput function. It now works for me:
First of all, thanks for your link. I have successfully run your code, but I now have the issue you may have faced earlier: my CGPoint coordinates have a weird offset for some reason. Here is the error I am getting:
Can you please help me sort it out? Thanks
I managed to fix the issue of this solution not working on the latest version of the code base, just 4 years later. If you go through the instructions provided in #5, they won't work as written; that's because the author has pushed a "simplified version" of the DlibWrapper class. So I went through the previous commit history and found the original (the one from May 2016). First and foremost, replace the entirety of your DlibWrapper.mm file with the following:
I applied the changes made in the comments on issue #5 so that it now supports portrait mode. The next change you need to make is...
BE AWARE! There are TWO captureOutput functions; you must choose the one that has NO code inside the function block except for a simple print line. Then add the following inside the function, so the final version of SessionHandler's captureOutput will look like this:
BOOM. All issues have been resolved. Oh, by the way, add
Hope that helps. I know I may be a little late now that the Vision and ARKit frameworks are out, but if you're writing C++ code in tandem with Swift and want to import this stuff from the C++ side using Objective-C++, this is a tutorial for you! John Seong
Never mind — you're supposed to add |
ANOTHER UPDATE: just replace the whole captureOutput with the following:
You also have to tinker with changing the .map part to |
I tried setting the camera orientation to landscape or portrait, but the code below (in DlibWrapper.mm) still returns width = 640 and height = 480 (with the preset AVCaptureSessionPreset640x480):
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
As a result, I couldn't do landmark detection in portrait view. Could you fix it?