
The biggest hog in ZPE

First things first, what's the biggest, slowest operation in ZPE? The answer to that is file reads and writes.

So, to optimise ZPE around file reads and writes, the best option is to reduce them. Just as Chrome slows down when it is loaded with extensions, having many ZPE plugins and start-up configurations slows start-up. Each of these configurations needs to be pushed to any children that the ZPE instance spawns, which in turn reduces performance.

ZPE has always been fast enough to start, compile and even interpret. However, when ZPE spawns a child, or even a thread, the worker child (isn't it a cruel world where a couple of milliseconds into a child's life it's given work to do? 🤪) then has to access all of its parent's properties. This hinders the performance of the parent more than the child.

So what's the solution? I will move a lot of the file reads to the main executable and perform them once. Done.
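To make the idea concrete, here is a minimal Java sketch of the approach: the parent performs the file read once and hands the already-parsed result to each child, so no child ever touches the filesystem itself. The class and method names (StartupConfig, ChildWorker, loadOnce) are purely illustrative and are not ZPE's actual API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Illustrative sketch: the parent reads the configuration file exactly once and
// passes the in-memory result to every child it spawns.
final class StartupConfig {
    private final List<String> lines;

    private StartupConfig(List<String> lines) {
        this.lines = lines;
    }

    // Performed once, in the main executable, before any children exist.
    static StartupConfig loadOnce(Path configFile) throws IOException {
        return new StartupConfig(Files.readAllLines(configFile));
    }

    List<String> lines() {
        return lines;
    }
}

final class ChildWorker implements Runnable {
    private final StartupConfig config; // shared, already-loaded configuration

    ChildWorker(StartupConfig config) {
        this.config = config;
    }

    @Override
    public void run() {
        // The child works with the in-memory configuration; no file reads happen here.
        config.lines().forEach(line -> { /* apply the setting */ });
    }
}
```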

Number 2: condition checking. I have noticed a difference in performance between loops in ZPE and loops in native Java, and this is the biggest issue in ZPE. Whilst ZPE still manages loops reasonably quickly, it does not compare to native Java.

Why is this? ZPE currently has no optimisation on conditions. This means a static condition such as $i < 10 still needs to be re-evaluated on each iteration. Now for the fun part: how can this be changed? Well, I'm not going to reveal everything until I've implemented it, but I will give you an idea: it will use compiler-based optimisation to optimise the condition beforehand.
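As a hint of what compiler-based optimisation of conditions could look like, here is an illustrative Java comparison; it is not ZPE's actual implementation, and the helper names are made up. The naive version re-parses and re-evaluates the condition text on every pass, while the compiled version turns the condition into a predicate once, before the loop runs.

```java
import java.util.function.IntPredicate;

// Illustration only: avoid re-evaluating a static condition such as $i < 10 on
// every iteration by compiling it into a predicate ahead of time.
public class ConditionSketch {

    // Naive interpreter style: the condition text is re-parsed and re-evaluated each pass.
    static int interpretLoop(String conditionSource) {
        int i = 0;
        while (evaluate(conditionSource, i)) { // repeated parse + evaluate
            i++;
        }
        return i;
    }

    // Compile-once style: the condition has already been turned into a predicate.
    static int compiledLoop(IntPredicate condition) {
        int i = 0;
        while (condition.test(i)) { // plain method call, no re-parsing
            i++;
        }
        return i;
    }

    // Stand-in for a full expression evaluator; only handles "$i < <number>" for the demo.
    static boolean evaluate(String conditionSource, int i) {
        return i < Integer.parseInt(conditionSource.replace("$i <", "").trim());
    }

    public static void main(String[] args) {
        System.out.println(interpretLoop("$i < 10")); // 10
        System.out.println(compiledLoop(i -> i < 10)); // 10
    }
}
```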

Number 3: function calls. Function calls are extremely quick in most cases but can also be incredibly slow. Function calls were optimised in version 1.5 after being modified in version 1.4 to use a mapping system. There is a point, however, where function names collide in the map and we have a problem, an O(N) problem to be precise! I will look at improving the hash so that keys are spread more widely. Of course, this has an effect on memory, so it needs to be worthwhile to do it.
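To illustrate why the quality of the hash matters, here is a hypothetical Java sketch; it is not ZPE's real internals. With a deliberately poor hash, every function key lands in the same bucket and lookups degrade towards scanning the colliding entries, whereas a sensible hash spreads the keys across the table at the cost of the extra memory a wider table needs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical function-mapping sketch: a poor hash sends every key to one bucket,
// so lookups degrade towards a scan of the colliding entries (modern Java HashMaps
// partially mitigate this by treeifying very large buckets). A good hash keeps
// lookups close to O(1).
public class FunctionLookupSketch {

    static final class FunctionKey {
        final String name;
        final boolean useGoodHash;

        FunctionKey(String name, boolean useGoodHash) {
            this.name = name;
            this.useGoodHash = useGoodHash;
        }

        @Override
        public int hashCode() {
            // Poor hash: every key collides. Good hash: spread keys using the name.
            return useGoodHash ? name.hashCode() : 1;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof FunctionKey && ((FunctionKey) o).name.equals(name);
        }
    }

    public static void main(String[] args) {
        Map<FunctionKey, Runnable> functions = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            // Flip the flag to false to see inserts and lookups slow down as every
            // entry ends up sharing one bucket.
            functions.put(new FunctionKey("func" + i, true), () -> { });
        }
        long start = System.nanoTime();
        functions.get(new FunctionKey("func9999", true));
        System.out.println("lookup took " + (System.nanoTime() - start) + " ns");
    }
}
```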

Also, I intend to merge a few things, namely functions and constants. What?! You don't need to understand how just yet, but it should improve memory usage.

Update in 2019

This article is definitely an interesting read in reflection. These were three key areas identified as problematic back in the day, and they had been for a long time. All of them have since been fixed.

File operations have been moved to the parent and extensions are loaded once and stored for the children; it's the parent that now does the work, not the children.

Condition checking has very recently been updated again so that it is faster than ever. Now using lazy evaluation (or short-circuit evaluation), ZPE can decide more quickly whether a condition is true or false. It also adds safety for indexes (read the documentation for this), which doubles as a performance-improving feature.
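Short-circuit evaluation is easiest to see in plain Java, which demonstrates the same concept: in the sketch below, once the left-hand bounds check fails, the right-hand index access is never evaluated. That is both the safety benefit and the performance benefit mentioned above. The helper name is made up for the example.

```java
import java.util.List;

// Sketch of short-circuit (lazy) evaluation: once the left operand of && is false,
// the right operand is never evaluated, so the index access can't go out of bounds
// and the potentially expensive right-hand check is skipped.
public class ShortCircuitSketch {

    static boolean elementIsPositive(List<Integer> values, int i) {
        // If i is out of range, the left test fails and values.get(i) never runs.
        return i < values.size() && values.get(i) > 0;
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(3, -1, 7);
        System.out.println(elementIsPositive(values, 0));  // true
        System.out.println(elementIsPositive(values, 10)); // false, no exception
    }
}
```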

Finally, constants are no longer part of the interpreter and are now compiled into the language as you would expect. This makes constants almost three times faster than variables in some tests I ran, since the function lookup, variable lookup and value return process (the three steps involved in obtaining a variable) is no longer involved. Further, did you know that global variables are slower to access than, say, a local variable? This comes down to the depth of the function too.
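The variable-versus-constant difference, and the effect of function depth, can be sketched with a simple scope chain in Java. This is only an illustration of the general technique, not ZPE's actual implementation: resolving a variable walks outwards from the innermost scope, so a global read from deep nesting costs more probes, whereas a compiled-in constant is just a literal and needs no lookup at all.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative scope-chain sketch: variable resolution walks from the innermost
// scope outwards, so depth matters; a compiled-in constant skips lookup entirely.
public class ScopeSketch {

    static final class Scope {
        final Scope parent;                       // null for the global scope
        final Map<String, Object> vars = new HashMap<>();

        Scope(Scope parent) {
            this.parent = parent;
        }

        Object lookup(String name) {
            for (Scope s = this; s != null; s = s.parent) {
                Object value = s.vars.get(name);  // one map probe per level of depth
                if (value != null) {
                    return value;
                }
            }
            throw new IllegalStateException("undefined variable: " + name);
        }
    }

    public static void main(String[] args) {
        Scope global = new Scope(null);
        global.vars.put("greeting", "hello");

        // Simulate code three functions deep: the global read walks up through
        // three empty scopes before reaching the global one.
        Scope deep = new Scope(new Scope(new Scope(global)));
        System.out.println(deep.lookup("greeting"));

        // A constant the compiler has already substituted needs no lookup at all.
        final String GREETING = "hello";
        System.out.println(GREETING);
    }
}
```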

Overall, I'm very happy with how ZPE works and how it has improved in each of the aforementioned areas to some degree or another.

Posted by jamiebalfour04 in Technology
Tags: zpe, speed, performance, hog