Make CS-Cart Blazing Fast!

Hello,

At the moment the template engine (Smarty) by default checks on every request whether any of the templates used on the page have changed. But if you have a static website (like me) that doesn't change its layouts or anything template-related, you might want to disable this feature, as Smarty will then skip the step of checking whether any of these templates have changed.

You can, however, still edit things: for example, if you change a product description it will also be changed on the storefront (without clearing the cache), just like many other things:

- Images,

- Descriptions,

- Carts,

- Banners,

- etc.

So only the layout templates will no longer be checked!

This made a huge difference for me: my TTFB dropped from 900 ms to about 250 ms :shock: (on category pages, product pages, the home page, etc.).

You can do the same. You just have to edit this file:

/app/lib/vendor/smarty/libs/Smarty.class.php

There, change "$compile_check = true" to "$compile_check = false".
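
For reference, the edited line looks roughly like this (the exact line number depends on the Smarty version bundled with your CS-Cart; a later reply mentions it being around line 293):

// Around line 293 of app/lib/vendor/smarty/libs/Smarty.class.php
// (the line number differs between Smarty releases)
public $compile_check = false; // was: public $compile_check = true;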

Do use this at your own risk!

Best wishes,

P.S. If you have any more tips on making CS-Cart blazing fast feel free to add them down below!

Hi.

CS-Cart already handles this Smarty setting. By default, compile check is disabled. However, it may be enabled if development mode is on, the debugger is active, or the "Rebuild cache automatically" setting is enabled in the "Design -> Themes" section of the administration area. The relevant code is in:

app/functions/fn.init.php
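
In other words (this is only an illustrative sketch, not the actual code from app/functions/fn.init.php), the decision boils down to something like:

// Illustrative sketch only; names and details differ in the real fn.init.php.
$smarty = new Smarty();
$development_mode_on   = false; // development mode in the CS-Cart config
$debugger_active       = false; // built-in debugger
$rebuild_cache_auto_on = false; // "Rebuild cache automatically" in Design -> Themes
$smarty->compile_check = $development_mode_on || $debugger_active || $rebuild_cache_auto_on;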

Sorry if I've ruined your major breakthrough in web performance optimisation :)

It did work for me, though. I have "Rebuild cache automatically" disabled and development mode off, basically everything that could possibly slow this down.

Maybe this is a problem with PHP 7.1? Now I am curious why it brought my TTFB down.

Please don't be rude, we are all trying to improve CS-Cart :)

This makes me even more curious about why it still brought my TTFB down, because I have:

1. Development mode disabled

2. I am not using debug mode

3. Rebuild cache automatically is disabled

And I have requested the page multiple times for it to generate the cache properly.

Just one more question, Vali: is there a good way to reduce Smarty's compilation time? My TTFB on large category pages is still about 300 to 400 ms (and this is a locally hosted server, so real-world performance would be about 400 to 500 ms). Or is there no real way to decrease this?

P.S. I tried a Redis cache for the templates, but it somehow caused many failed transactions, so using Redis for this isn't an option. Furthermore, my stores are multilingual and can therefore not use the full-page caching method, as this will break language detection. Any suggestions?

Are you on SSD-based hosting? Did you try visiting the homepage with ?debug=1 added to the URL? It helped me understand that DB queries were the main reason for my TTFB being very high. You can use Redis and every other cache system in the world, but CS-Cart runs a really long query to pull the products shown on your homepage, and just that query takes around 300 ms. If you are trying to bring TTFB down, this and other queries add up very quickly. I would suggest cutting down the blocks on your homepage and seeing if it makes a difference. It even makes sense to create HTML-based blocks and update their contents through a cron job, filling in the product information yourself.
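
For illustration, a cron script along these lines could pre-render such a block; everything below (the table name, columns and output path) is hypothetical, not CS-Cart's actual schema or API:

<?php
// Hypothetical cron script: pre-render a "featured products" HTML block so the
// storefront can include static HTML instead of running the heavy product
// query on every request. Table, columns and paths are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=shop_db', 'db_user', 'db_pass');
$stmt = $pdo->query('SELECT name, price FROM featured_products LIMIT 10');
$html = '<ul class="featured-products">';
foreach ($stmt as $row) {
    $html .= '<li>' . htmlspecialchars($row['name'])
           . ' - ' . htmlspecialchars(number_format((float) $row['price'], 2)) . '</li>';
}
$html .= '</ul>';
// Write the markup where a static HTML block can include it.
file_put_contents('/path/to/storefront/blocks/featured_products.html', $html);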

Yes, I am on SSD-based hosting. I tried adding debug to the URL, but it still didn't help. The only thing that caught my attention was the huge time for the styles compilation in the log, as this takes almost 300 ms. Furthermore, my database queries are handled in 17 ms, so for me the problem isn't in the database.

Okay, now it is just getting weirder for me. I have just tried the debug menu again, and now it says that my category page is loaded in 0.18 seconds. But my TTFB is almost 500 ms? Is the difference just the time it takes my computer to process the request, or is this something that needs to be optimized on my server?

For example, I use nginx, PHP 7.1 (with opcache), the latest MySQL version, etc.: basically everything you need for a fast environment. Any recommendations?
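
For what it's worth, a quick check like the minimal sketch below confirms whether opcache is actually active and has enough memory (run it through the same PHP-FPM pool as the store, since the CLI uses a separate opcache instance):

<?php
// Minimal opcache sanity check: prints hit/miss counters and memory usage.
if (!function_exists('opcache_get_status') || ($status = opcache_get_status(false)) === false) {
    echo "opcache is not active\n";
    exit;
}
printf("hits: %d, misses: %d\n",
    $status['opcache_statistics']['hits'],
    $status['opcache_statistics']['misses']);
printf("memory used: %.1f MB, free: %.1f MB\n",
    $status['memory_usage']['used_memory'] / 1048576,
    $status['memory_usage']['free_memory'] / 1048576);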

TTFB includes the time required for data packets to be transferred from the web server to your device through all of the network nodes: routers, Wi-Fi subnetworks, etc.

In other words, TTFB is always higher than the raw server response time. In short: the closer you are to the server, the lower the TTFB you will get.

Please run the

ping example.com -c 10
command on the device where you’re getting bad TTFB (where example.com is your website domain).

You'll see something like this:
$ ping example.com -c 10
PING example.com (193.70.37.175) 56(84) bytes of data.
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=1 ttl=51 time=59.4 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=2 ttl=51 time=60.3 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=3 ttl=51 time=60.8 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=4 ttl=51 time=60.8 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=5 ttl=51 time=60.6 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=6 ttl=51 time=60.0 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=7 ttl=51 time=60.8 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=8 ttl=51 time=60.3 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=9 ttl=51 time=60.1 ms
64 bytes from 175.ip-193-70-37.eu (193.70.37.175): icmp_seq=10 ttl=51 time=60.1 ms

--- example.com ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9033ms
rtt min/avg/max/mdev = 59.479/60.369/60.885/0.528 ms

'60.369' in the last line is the average round-trip time (RTT) in ms, the time required for a packet to reach the destination server and come back. It is the time between sending the packet and receiving confirmation of its successful (or not) trip.

So this RTT is network latency you can't avoid; it will be included in your TTFB one or more times depending on the network quality (in case of packet loss, which is very common on Wi-Fi networks, the packet will be sent again, so you have to wait one more RTT plus the time required to recognise that the packet was actually lost).

It's a very common mistake to measure the website server software's performance using the TTFB you're getting at your device, which can be influenced by a lot of factors unrelated to server performance.

Of course, such a metric (TTFB at the end-user's device) is a logical endpoint, measuring how fast the end user will receive the page. However, fixing bad TTFBs may require more complex steps like server geotargeting (placing a server closer to the country that generates most of your traffic/sales) or geo-replication, meaning you run many servers across the world for the highest service availability.
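
To separate server time from network time, you can also break TTFB down with a small script; the sketch below uses PHP's curl extension and a placeholder URL, so adjust it for your own storefront. Running it both on the server itself and from your own machine shows how much of the difference is pure network latency.

<?php
// Rough TTFB breakdown for a single request using PHP's curl extension.
$ch = curl_init('https://example.com/'); // replace with your storefront URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
printf("DNS lookup:  %.1f ms\n", curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME) * 1000);
printf("TCP connect: %.1f ms\n", curl_getinfo($ch, CURLINFO_CONNECT_TIME) * 1000);
printf("TTFB:        %.1f ms\n", curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME) * 1000);
printf("Total:       %.1f ms\n", curl_getinfo($ch, CURLINFO_TOTAL_TIME) * 1000);
curl_close($ch);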

Well, I think this is almost negligible; my server responds within 5 ms in the ping test. That's why I was getting confused as well.

The interesting thing is that changing

293: public $compile_check = false;

did the trick for me!

From a stable server response time of around 1000 ms, it dropped to around 400 ms (!!!). I will keep watching this in Google Webmaster for a week or so.

Have you observed any negative effects after that change? Maybe some pages started to work incorrectly?