Shop Slows To A Crawl, Fatal Error

Haven't touched the shop in months, but out of the blue we've discovered it has slowed to a crawl, taking minutes to respond. A reboot remedies it briefly, but after 20 minutes or so it slows again.

We clear cache and templates, with the same short-term improvement.

We also seem to get this fatal error on reboot:

PHP Fatal error: Call to undefined function Tygh\ob_get_clean() in /var/www/html/ourshop.com/app/Tygh/Ajax.php on line 111

Any thoughts?

ob_get_clean() is not a CS-Cart function and shouldn't be referenced as being in the Tygh namespace. It is a native PHP function for capturing and clearing an output buffer.
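For what it's worth (this is an assumption, not something the error alone proves): inside a namespaced PHP file, an unqualified call like ob_get_clean() first looks for Tygh\ob_get_clean() and then falls back to the global function, so PHP reporting it as undefined *in the Tygh namespace* can mean the native function was disabled, e.g. via disable_functions in php.ini. A quick check, sketched against a sample ini here since the real path varies by distro (on Ubuntu it is typically /etc/php5/apache2/php.ini):

```shell
# Assumption: ob_get_clean may have been added to disable_functions.
# Simulated with a sample ini; on the server, grep your real php.ini.
cat > /tmp/php.ini.sample <<'EOF'
disable_functions = exec,passthru,ob_get_clean
EOF
# If this prints a match, the "undefined function" error is explained
grep -o 'ob_get_clean' /tmp/php.ini.sample
```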

Would help to know what version your store is running, whether the slowness is only on the frontend or on both the front and back ends, and any other details you can provide.

When you say 'reboot' are you talking about rebooting your server? Or just restarting httpd?

Hi Tony,

Thanks for weighing in.

We're running 4.03 MVE on Ubuntu.

Slowness (to a standstill) on both the front and back ends.

We're using the "reboot" command from within the AWS console for the instance that the site runs on.

From the terminal I'm seeing a few www-data processes eventually overwhelming the CPU over time. It usually takes 20 minutes or so from reboot. The server seems high on idle and low on wait states, leading me to believe it's application-specific.

PHP processes should generally be very short-lived unless you're doing a backup, import/export, etc.

So I assume you mean httpd processes (your reference to 'www-data', the user Apache runs as on Ubuntu).

You should look at your access logs to see whether all those connections are coming from the same IP and then backtrack from there.
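To illustrate that log check (the log path and combined format are assumptions; adjust for your vhost), here is a quick pipeline that counts requests per client IP, shown against a small sample log:

```shell
# Sample lines standing in for /var/log/apache2/access.log
cat > /tmp/access.log.sample <<'EOF'
203.0.113.7 - - [26/Oct/2016:17:04:04 +0000] "GET / HTTP/1.1" 200 512
203.0.113.7 - - [26/Oct/2016:17:04:05 +0000] "GET /a HTTP/1.1" 200 512
198.51.100.2 - - [26/Oct/2016:17:04:06 +0000] "GET / HTTP/1.1" 200 512
EOF
# Count requests per client IP, busiest first
awk '{print $1}' /tmp/access.log.sample | sort | uniq -c | sort -rn | head
```

On the real log, the top few lines of that output tell you immediately whether one network is hammering the site.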

I've seen cases in the past where some kinds of caching proxy servers (mostly used by large organizations or government sites) like to do a read-ahead of all links on a page so they are cached for their users behind their firewalls. But I haven't seen this for quite some time.

I guess the first thing I'd try to do is determine whether the load is coming from robots or real users. I do find the fatal error in your OP troubling, though.
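One hedged way to split robots from real users is to count requests whose user agent looks like a crawler (the pattern below is an assumption; extend it with whatever agents show up in your own log). Sketched against a sample combined-format log:

```shell
# Sample log lines with user agents (combined log format assumed)
cat > /tmp/ua.log.sample <<'EOF'
203.0.113.7 - - [26/Oct/2016:17:04:04 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.1)"
198.51.100.2 - - [26/Oct/2016:17:04:06 +0000] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"
EOF
# Count lines whose user agent matches common crawler names
grep -ciE 'bot|crawl|spider|ahrefs|semrush' /tmp/ua.log.sample
```

If the crawler count dominates the total line count, the load is robots, not customers.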

Additionally, we have a large number of errors in the apache2 log, as follows:

Hundreds of these:

[Wed Oct 26 17:04:04.618768 2016] [:error] [pid 3232] [client 157.55.39.4:21150] script '/var/www/html/ourserver.com/index.php' not found or unable to stat

Eventually followed by dozens like this:

[Wed Oct 26 19:45:03.564155 2016] [core:notice] [pid 1286] AH00052: child pid 6104 exit signal Segmentation fault (11)

Rebooting produces a number of SIGTERM and SIGKILL errors as it attempts to kill the errant processes.

Since you're running on AWS (which I hate working with), you probably have ownership/permissions set incorrectly.

By default in AWS, Apache runs as nobody/nobody and your files are probably owned by cpanel_user/nobody, but the modes are probably incorrect.

Verify that directories are set to mode 02775 (set-group-id, read/write by owner and group, read by others) and files are set to 0664 (read/write by owner and group, read by others).

The set-group-id bit on directories ensures that any files CS-Cart creates within those directories are always created with the group ID of the parent directory rather than the group ID of the creator. Hence they should always be owned by group 'nobody'.

Example from one of my clients' AWS sites:

]# ls -ld . .. var app design index.php
drwsrwsr-x 37 sgcomadmin nobody      4096 Oct 24 13:59 ./
drwx--x--x 22 sgcomadmin sgcomadmin 36864 Oct 26 16:14 ../
drwsrwsr-x  9 sgcomadmin nobody      4096 Dec 24  2015 app/
drwsrwsr-x  4 sgcomadmin nobody      4096 Dec 24  2015 design/
-rw-rw-r--  1 sgcomadmin nobody      1300 Oct 19 14:30 index.php
drwsrwsr-x 14 sgcomadmin nobody      4096 Oct 21 14:47 var/

Found a staggering number of connections from a range of IPs in France, all running crawlers called ahrefs and something from semrush.

Added them to my .htaccess deny list.
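For anyone following along, a deny list like that can look as follows (the address ranges here are placeholders; use the networks from your own access log). This is Apache 2.2 syntax; Apache 2.4 uses `Require not ip` via mod_authz_core instead:

```apache
# .htaccess: block abusive crawler ranges (addresses are placeholders)
Order Allow,Deny
Allow from all
Deny from 192.0.2.0/24
Deny from 198.51.100.0/24
```

Both AhrefsBot and SemrushBot are also documented as honoring robots.txt, so a Disallow rule for their user agents is a gentler complementary step.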

Crossing my fingers that this is it.

Thanks for the pointers Tony - you know your stuff!

No problem. In today's world, there are just way too many ways for things to go wrong and too many compartments to get caught in. With the proliferation of CDNs and cloud-based environments like AWS, problems only get harder to find and diagnose.

Glad you may be working again...