Performance considerations #785
Comments
Which PHP version are you using? PHP 7.0 should have improved performance quite a lot. Also, using static methods would seriously affect testing, and besides that, I don't think Intervention was built with manipulating multiple 128MP images in mind. To be honest, you're better off using direct calls to GD, or even not PHP at all.
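For scale, a minimal sketch of what "direct calls to GD" could look like for a per-pixel copy loop (file names are placeholders, not taken from the benchmark):

```php
<?php
// Minimal sketch: raw GD per-pixel loop with no wrapper objects.
// 'in.png' / 'out.png' are placeholder file names.
$src = imagecreatefrompng('in.png');
$w = imagesx($src);
$h = imagesy($src);
$dst = imagecreatetruecolor($w, $h);

for ($y = 0; $y < $h; $y++) {
    for ($x = 0; $x < $w; $x++) {
        // imagecolorat() returns the packed RGB(A) int for true-color images.
        imagesetpixel($dst, $x, $y, imagecolorat($src, $x, $y));
    }
}

imagepng($dst, 'out.png');
```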
7.0.x and 7.1.9; the benchmark was run on both (with XDebug disabled). GD performance itself is still in question; however, the timing measurements show the baseline very clearly.
They are easily replaced by one-of-a-kind object instances (call them singletons, if you want). I'm just a bit of a Java-ish guy.
It's not about image size; it's about the amount of extra work on every call. It can take almost a second for a 512x512 image, as you can see, and that is quite small.
I basically have no choice here, because the only other library option is Imagine, which isn't supported anymore and has the very same issues, at least on the client side (I need to throw in a new point object and a new color object for every operation). However, I don't see the sense in an interface that isn't meant to perform.
Thank you @etki for your insights. Even though there may be performance issues, you might be better off with `Imagick::exportImagePixels` instead of reading an image pixel by pixel manually.
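Something along these lines (a rough sketch, untested against your workload; the file name is a placeholder):

```php
<?php
// Sketch: read a whole row of pixels per call via exportImagePixels()
// instead of one getImagePixelColor() call per pixel.
$image = new Imagick('source.png'); // placeholder file name
$width = $image->getImageWidth();

for ($y = 0; $y < $image->getImageHeight(); $y++) {
    // Returns $width * 3 integers: R, G, B for every pixel in the row.
    $row = $image->exportImagePixels(0, $y, $width, 1, 'RGB', Imagick::PIXEL_CHAR);
    for ($x = 0; $x < $width; $x++) {
        $r = $row[$x * 3];
        $g = $row[$x * 3 + 1];
        $b = $row[$x * 3 + 2];
        // ... process pixel ($x, $y) ...
    }
}
```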
Hi!
I've been implementing a projection conversion library. This means dealing with large (up to 16k x 8k) images that are composed pixel by pixel, translating each pixel into latitude/longitude and then back into a pixel using different translation rules. I'm stressing that because I have no way around going pixel by pixel, and that is part of an image library's everyday routine.
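To make the workload concrete, this is roughly the shape of the loop; `pixelToLatLng()` and `latLngToPixel()` are hypothetical placeholders for the two projections, while `pickColor()` and `pixel()` are the Intervention Image calls actually being made:

```php
<?php
// Sketch of the per-pixel projection loop described above.
// pixelToLatLng() / latLngToPixel() are hypothetical placeholders.
for ($y = 0; $y < $targetHeight; $y++) {
    for ($x = 0; $x < $targetWidth; $x++) {
        list($lat, $lng) = pixelToLatLng($x, $y);       // target pixel -> coordinates
        list($srcX, $srcY) = latLngToPixel($lat, $lng); // coordinates -> source pixel
        $color = $source->pickColor($srcX, $srcY);      // one library call per pixel...
        $target->pixel($color, $x, $y);                 // ...and another to write it back
    }
}
```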
So, using the library as a medium that does nothing but delegate calls to the implementation, I was expecting to see a 2x, maybe 3x increase in processing time. However, my first test didn't want to end at all, so I finally came up with a benchmark:
To reproduce: https://github.com/etki/php-image-processing-benchmark
That's a 20x slowdown for GD. The CPU is doing real work no more than 5% of the time. And the only real reason Imagick isn't slowed down as badly is that it is slow as hell by itself, so its own work takes a larger relative share of the processing.
So, where did the time go? The XDebug extension to the rescue:
In my benchmark, both `setPixelAt` and `getPixelAt` take roughly 50% of the time, but both drown in `Image::__call` (97%) and, downstream, in `Intervention\Image\Gd\Driver->executeCommand` (93%). This method creates a new command and executes it; the derived commands take ~35.5% of the time each. That means that 21% (29%, counting callers) of the time (oh my!) is lost simply on instantiating new objects, checking command names and other totally unneeded stuff. Simply delegating the work to static functions where necessary would eliminate a lot of this. But it gets even worse: I understand that there will always be overhead when wrapping things in a library, but roughly 15% of all execution time goes to the strange `Argument` call. Should any image processing library spend that much time on this? The real workload can't even be seen on this map; it's irrelevant next to what the library spends around it.
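For context, the dispatch path being profiled looks roughly like this (a simplified paraphrase of the v2 source, not verbatim library code):

```php
<?php
// Simplified paraphrase of Intervention Image v2's magic dispatch
// (not verbatim library code): every method call on Image goes through
// __call(), which asks the driver to build and run a command object.
class Image
{
    public function __call($name, $arguments)
    {
        $command = $this->driver->executeCommand($this, $name, $arguments);
        return $command->hasOutput() ? $command->getOutput() : $this;
    }
}

abstract class AbstractDriver
{
    public function executeCommand($image, $name, $arguments)
    {
        $commandName = $this->getCommandClassName($name); // string juggling per call
        $command = new $commandName($arguments);          // fresh object per call
        $command->execute($image);                        // finally, the real work
        return $command;
    }
}
```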
And here comes the killer part:
If XDebug and KCacheGrind haven't gone nuts, the actual work takes less than 0.03% of the time. Now, this may not be very accurate: you can see my call to `Color::decode` taking 0.11% of the time (with the same number of calls), while a separate benchmark shows that `imagecolorat` is usually 5 times slower than `Color::decode`. I don't have time to dig deeper and find the real figures, because the estimates already speak for themselves. Even if we multiply that 0.02% result by 100, it would take just 2% of the whole time.
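To make the comparison reproducible in isolation, here is a rough micro-benchmark sketch (not the linked benchmark itself; assumes Intervention Image v2 with the GD driver):

```php
<?php
// Micro-benchmark sketch: raw imagecolorat() vs Intervention's pickColor()
// on the same 512x512 image. Assumes Intervention Image v2, GD driver.
use Intervention\Image\ImageManager;

$manager = new ImageManager(['driver' => 'gd']);
$image = $manager->canvas(512, 512, '#808080');
$gd = $image->getCore(); // the underlying GD resource

$start = microtime(true);
for ($y = 0; $y < 512; $y++) {
    for ($x = 0; $x < 512; $x++) {
        imagecolorat($gd, $x, $y); // direct GD call
    }
}
$raw = microtime(true) - $start;

$start = microtime(true);
for ($y = 0; $y < 512; $y++) {
    for ($x = 0; $x < 512; $x++) {
        $image->pickColor($x, $y); // goes through __call() and a command object
    }
}
$wrapped = microtime(true) - $start;

printf("raw GD: %.3fs, Intervention: %.3fs (%.1fx)\n", $raw, $wrapped, $wrapped / $raw);
```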
I see this as a major library issue. I really can't use the library because it spends my CPU time on anything but real work. I do understand the intention to wrap everything in objects and take a more human-oriented approach, but there is really no need to wrap a simple `querySomething($x, $y);` call in a separate object (also, diving into all the magic `__call` stuff was highly unpleasant). It probably also defeats the low-level hardware optimizations brought by bright engineers: if you create an object for every low-level operation, you are abusing RAM and forcing the CPU to cache things that will be thrown away right away. I, again, see this as a major issue that should be taken into consideration.
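To illustrate what "no wrapping" could look like, here is a hypothetical thin accessor (made-up class and method names, not a proposal of actual library code):

```php
<?php
// Hypothetical thin wrapper: one direct GD call per read, no magic
// __call() dispatch, no per-call command object. Names are made up.
final class FastGdImage
{
    /** @var resource|\GdImage */
    private $core;

    public function __construct($core)
    {
        $this->core = $core;
    }

    // Direct query: no object allocation on the hot path.
    public function colorAt(int $x, int $y): int
    {
        return imagecolorat($this->core, $x, $y);
    }
}
```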