shmem cache, anyone using it?

I am curious whether anyone is using, or has used, shmem for cache. Is it any faster than SQLite?

By definition it would be faster since there is no file IO whatsoever once it's built. I think I tried it a while back and switched out due to some problem, but I don't remember exactly what it was. That was also probably back in the 2.1.x time frame. Could be more stable now.



It should be pretty efficient since shmem is shared across processes so there's only one instance of the cached data in memory regardless of how many processes are accessing it.



Reading memory is always faster than reading file data even if it's structured in a database.
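
To make it concrete, the sketch below is roughly all a shmem-backed cache does. These are just the standard PHP sysvshm calls, not CS-Cart's actual cache code, but the principle is the same: every process that attaches with the same key is looking at the same chunk of RAM, so a cache hit never touches the disk.

[code]
<?php
// Illustration only: standard PHP sysvshm calls, not CS-Cart's cache backend.
// Every worker process that attaches with the same key sees the same segment,
// so the cached data lives once in RAM and is never read back from disk.

$key  = ftok(__FILE__, 'c');            // derive a System V IPC key from an existing file
$shm  = shm_attach($key, 1024 * 1024);  // create or attach to a 1 MB shared segment

$slot = crc32('products_list');         // sysvshm addresses variables by integer key

if (shm_has_var($shm, $slot)) {
    $data = shm_get_var($shm, $slot);   // cache hit: read straight out of shared memory
} else {
    $data = range(1, 1000);             // stand-in for an expensive query/build step
    shm_put_var($shm, $slot, $data);    // immediately visible to every other process
}

shm_detach($shm);                       // detach; the segment itself stays resident
[/code]

Switching CS-Cart over is just a matter of changing the cache backend setting in config.php (in the 2.x line it's the CACHE_BACKEND define, if I remember right), so it's easy to flip back if you run into the same sort of instability I did.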

[quote name='tbirnseth' timestamp='1321640326' post='126243']

By definition it would be faster since there is no file IO whatsoever once it's built. I think I tried it a while back and switched out due to some problem, but I don't remember exactly what it was. That was also probably back in the 2.1.x time frame. Could be more stable now.



It should be pretty efficient since shmem is shared across processes so there's only one instance of the cached data in memory regardless of how many processes are accessing it.



Reading memory is always faster than reading file data even if it's structured in a database.

[/quote]



Thanks. I'll see about giving it a try. You sure seem to know a lot about CSC. Any suggestions on who would be best to host with for a VPS (a VPS because I need to host at least five sites)? The budget is probably around $50 a month.

A lot of this is basic systems engineering (which I've done for 30+ years). But I've also worked almost exclusively with cs-cart for 3 years or so now. Given that my business is B2B consulting, I tend to learn a few things along the way…



I have no input on hosting. Way too many variables to account for. Just because it's a VPS doesn't mean a thing. It simply means that you have control over the operating system environment but you're still sharing the hardware with other sites/users. So depending on the host, that could be good or bad…



Cs-cart is a hog of server resources. I've stopped providing any new hosting services because I can't do it cost-effectively for even moderate activity sites.

[quote name='tbirnseth' timestamp='1321643881' post='126256']

A lot of this is basic systems engineering (which I’ve done for 30+ years). But I’ve also worked almost exclusively with cs-cart for 3 years or so now. Given that my business is B2B consulting, I tend to learn a few things along the way…



I have no input on hosting. Way too many variables to account for. Just because it’s a VPS doesn’t mean a thing. It simply means that you have control over the operating system environment but you’re still sharing the hardware with other sites/users. So depending on the host, that could be good or bad…



Cs-cart is a hog of server resources. I’ve stopped providing any new hosting services because I can’t do it cost-effectively for even moderate activity sites.

[/quote]



Thanks for your input. I had no idea CSC was so resource-intensive when I got into it. Now I'm so far into it I have to make it all work :)



Maybe a hybrid VPS will be the direction I am forced to go. Better than a standard VPS, right?

Don't know… “Hybrid VPS” is a marketing term.

Just remember you are sharing hardware. So it's all about how much the host loads the hardware and how many of the VPS's contend for the same resources.

[quote name='tbirnseth' timestamp='1321683254' post='126294']

Don't know… “Hybrid VPS” is a marketing term.

Just remember you are sharing hardware. So it's all about how much the host loads the hardware and how many of the VPS's contend for the same resources.

[/quote]



Anyone know how I can verify the shmem cache is working? How will I know?

At Futurehosting the difference is that every VPS is guaranteed a certain number of CPUs and amount of memory. So you will always have, e.g., 1 or 2 full CPUs at your disposal. So they might have 8 VPSs on one machine instead of 20 or 30.



I’m gonna try shmem this week :)

@Flow - But there are resources that are shared, i.e. disk controllers and Ethernet controllers. Since they are the slowest devices on the system, they have the greatest contention. Unfortunately, with a VPS you do not have your own file cache or your own network IO cache, so you have very little control over how responsive the underlying file/network I/O is.



Almost every website will be IO bound, not CPU/memory bound. And yes, memory can also become IO bound if it is not physical memory you are partitioning. But remember too, that you have to run the OS in the memory you've been allocated. An iostat can reveal some of what's going on.



@Sole - If you're not seeing any errors, then it's working. Check your PHP error log just to be sure.
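
If you want more than the absence of errors, run "ipcs -m" from a shell once the cache has been primed and see whether the web server user owns any shared memory segments. You can also do a quick sanity check from PHP itself. The sketch below is purely diagnostic (not CS-Cart code) and just confirms the shared memory extensions are available and that a test segment round-trips correctly:

[code]
<?php
// Diagnostic sketch only: confirms the shared memory extensions are loaded
// and that a test segment can be created, written, and read back.

var_dump(extension_loaded('shmop'), extension_loaded('sysvshm'));

if (extension_loaded('sysvshm')) {
    $shm = shm_attach(ftok(__FILE__, 't'), 65536);  // small throwaway test segment
    shm_put_var($shm, 1, 'hello');
    var_dump(shm_get_var($shm, 1) === 'hello');     // true => shared memory round trip works
    shm_remove($shm);                               // remove the test segment
    shm_detach($shm);
}
[/code]

If both of those come back false, the shmem backend can't be doing anything, no matter what the config says.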

Learning every day! So basically, this is true, but won’t really matter? :)



Hybrid VPSs are different than standard VPSs as you have a significant boost of CPU with an average of 1 CPU core per VPS. There will never be more than 8 VPSs per node, thus allowing 1 core per VPS. This also increases your disk IO availability as well as overall performance with added RAM, disk space, upgraded network ports, etc.

Disagree. You still have a single (usually) disk controller and a single (usually) network adapter. Those physical devices will be shared by whatever activity is happening on that box. Even if you had multiples, they are not assigned exclusively to your VPS. Hence you are sharing hardware resources and, depending on the other VPSs on the system, it could be contentious or not. The only advantage is that you probably have your own instance of mysqld rather than it being shared across the VPSs…



Almost all IO on any system is shared among the services being provided on that system regardless of what the operating system environment is (VPS, non-VPS, Windoz, Linux, Unix, VMS, etc.) The only thing a VPS can really partition for you is physical and virtual RAM and CPU.

[quote name='Flow' timestamp='1321815199' post='126369']

At Futurehosting the difference is that every VPS is guaranteed a certain number of CPUs and amount of memory. So you will always have, e.g., 1 or 2 full CPUs at your disposal. So they might have 8 VPSs on one machine instead of 20 or 30.



I’m gonna try shmem this week :)

[/quote]





Let me know how shmem works for you, Flow. This whole VPS / hybrid VPS thing seems confusing.