Hi everyone,


I am migrating from Apache2 prefork with mod_php to a threaded Apache2
MPM with either mod_fcgid or mod_fastcgi + php-fpm, and I am looking for
something I can't find: a way to limit the total number of PHP processes
that are spawned.


The goal of this limit is, of course, to avoid hitting the RAM limit.

So my question is: with mod_fcgid, or with mod_fastcgi + php-fpm, is
there a directive to globally limit the total number of PHP processes?

I found the max_children directives, but they apply to a single pool (for
php-fpm) or a single wrapper (for mod_fcgid), and I need one pool / one
wrapper per vhost because I am dealing with shared hosting :)
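
To illustrate, this is roughly how the pools look today (pool names and
numbers here are made up, just a sketch of the per-vhost setup):

    ; php-fpm.conf -- one pool per vhost
    [client1]
    listen = /var/run/fpm-client1.sock
    user = client1
    group = client1
    pm = dynamic
    pm.max_children = 5        ; limit for THIS pool only
    pm.start_servers = 1
    pm.min_spare_servers = 1
    pm.max_spare_servers = 2

    [client2]
    listen = /var/run/fpm-client2.sock
    user = client2
    group = client2
    pm = dynamic
    pm.max_children = 5        ; again per pool: with 50 such pools the
                               ; worst case is 50 * 5 = 250 processes

Each pm.max_children only caps its own pool, so the worst case is the sum
over all pools; that sum is what I would like to cap directly.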



Thanks in advance.



PS: Sorry if I am not clear, I am not a native English speaker :)
Tom Boutell
Re: Global limit of number of processes in php-fpm or FastCGI
September 11, 2010 12:56PM
In what way would you limit the global total number of processes? Who
loses if there are too many?

If I were your customer I'd be asking for VM hosting at this point,
where I'm guaranteed a certain amount of memory and have a pool of PHP
processes just for me that won't shrink.

On Fri, Sep 10, 2010 at 5:19 PM, Troll <trollofdarkness@gmail.com> wrote:
> Hi everyone,
>
>
> I am migrating from Apache2 prefork with mod_php to a threaded Apache2
> MPM with either mod_fcgid or mod_fastcgi + php-fpm, and I am looking for
> something I can't find: a way to limit the total number of PHP processes
> that are spawned.
>
>
> The goal of this limit is, of course, to avoid hitting the RAM limit.
>
> So my question is: with mod_fcgid, or with mod_fastcgi + php-fpm, is
> there a directive to globally limit the total number of PHP processes?
>
> I found the max_children directives, but they apply to a single pool (for
> php-fpm) or a single wrapper (for mod_fcgid), and I need one pool / one
> wrapper per vhost because I am dealing with shared hosting :)
>
>
>
> Thanks in advance.
>
>
>
> PS: Sorry if I am not clear, I am not a native English speaker :)
>



--
Tom Boutell
P'unk Avenue
215 755 1330
punkave.com
window.punkave.com
I want a simple limit: the total number of PHP processes launched, across
all pools, must never exceed a specific number.

Imagine you have 50 clients, one pool per client, and 1 GB of RAM. Even
with the patch that is being finalized to allow start_servers=0, if every
client launches 1 or 2 processes, a hundred PHP processes end up running!
At 15 MB per process, that is 15*100 = 1,500 MB of RAM used by PHP
processes!

Whereas, if I can limit the global number of PHP processes, I can set an
idle timeout low enough so that, under high load, the resources are shared
between all the vhosts.

Is that clearer?


Thanks in advance :)

Troll

On 09/11/2010 06:54 PM, Tom Boutell wrote:
> In what way would you limit the global total number of processes? Who
> loses if there are too many?
>
> If I were your customer I'd be asking for VM hosting at this point,
> where I'm guaranteed a certain amount of memory and have a pool of PHP
> processes just for me that won't shrink.
>
> On Fri, Sep 10, 2010 at 5:19 PM, Troll<trollofdarkness@gmail.com> wrote:
>> Hi everyone,
>>
>>
>> I am migrating from Apache2 prefork with mod_php to a threaded Apache2
>> MPM with either mod_fcgid or mod_fastcgi + php-fpm, and I am looking for
>> something I can't find: a way to limit the total number of PHP processes
>> that are spawned.
>>
>>
>> The goal of this limit is, of course, to avoid hitting the RAM limit.
>>
>> So my question is: with mod_fcgid, or with mod_fastcgi + php-fpm, is
>> there a directive to globally limit the total number of PHP processes?
>>
>> I found the max_children directives, but they apply to a single pool (for
>> php-fpm) or a single wrapper (for mod_fcgid), and I need one pool / one
>> wrapper per vhost because I am dealing with shared hosting :)
>>
>>
>>
>> Thanks in advance.
>>
>>
>>
>> PS: Sorry if I am not clear, I am not a native English speaker :)
>>
>
>
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 06:06AM
> I want a simple limit: the total number of PHP processes launched, across
> all pools, must never exceed a specific number.
>
> Imagine you have 50 clients, one pool per client, and 1 GB of RAM. Even
> with the patch that is being finalized to allow start_servers=0, if every
> client launches 1 or 2 processes, a hundred PHP processes end up running!
> At 15 MB per process, that is 15*100 = 1,500 MB of RAM used by PHP
> processes!
>
> Whereas, if I can limit the global number of PHP processes, I can set an
> idle timeout low enough so that, under high load, the resources are shared
> between all the vhosts.
>
> Is that clearer?
>

Actually that's not a bad idea - it would work as a poor man's burst
resources on a VPS - but it needs more thought:
What happens when the limit has been reached and one of the pools wants
to start a new process?
1. Do we kill one of the processes already running?
2. Do we disallow starting a new process?

If 1, how do we determine which process to kill?
If 2, how do we prevent one pool from hogging all the resources just
because it was first to take them? Do we just wait for a child process
to die when it reaches max requests?

I think we would need a combination of both, plus the ability to set
priorities for pools. Going over the configured number of children would
reduce a pool's priority in geometric progression.
If starting a new process would have lower priority than killing one
from the lowest-priority pool (a different pool, of course), we disallow
it; otherwise we kill that process to make room for the new one. We would
also need a minimum child lifetime (time, requests?) to prevent two pools
from killing/starting a new child every other request.
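
A very rough sketch of the kind of decision rule I mean (plain C; all the
names and the halving formula are invented here, this has nothing to do
with the actual fpm code):

    #include <stdio.h>

    /* Hypothetical pool state -- not actual fpm structures. */
    struct pool {
        const char *name;
        int configured_children;  /* "fair share" for this pool  */
        int running_children;     /* children currently alive    */
        double base_priority;     /* admin-assigned priority     */
    };

    /* Priority drops geometrically (halves) for every child above the
     * configured share, so a pool that bursts keeps losing ground. */
    static double effective_priority(const struct pool *p)
    {
        double prio = p->base_priority;
        int over = p->running_children - p->configured_children;
        while (over-- > 0)
            prio /= 2.0;
        return prio;
    }

    /* When the global limit is hit and 'wants' needs a new child:
     * kill a child of 'victim' only if spawning for 'wants' beats
     * keeping the lowest-priority pool's child. */
    static int should_preempt(const struct pool *wants,
                              const struct pool *victim)
    {
        if (wants == victim)
            return 0; /* never kill our own child to spawn a new one */
        return effective_priority(wants) > effective_priority(victim);
    }

    int main(void)
    {
        struct pool busy = { "busy-vhost",  2, 6, 1.0 };
        struct pool idle = { "small-vhost", 2, 0, 1.0 };

        printf("preempt busy-vhost for small-vhost? %s\n",
               should_preempt(&idle, &busy) ? "yes" : "no");
        return 0;
    }

The minimum child lifetime check would still have to be layered on top of
something like this.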



--
Maciej Lisiewski
On 09/12/2010 12:04 PM, Maciej Lisiewski wrote:
>> I want a simple limit: the total number of PHP processes launched, across
>> all pools, must never exceed a specific number.
>>
>> Imagine you have 50 clients, one pool per client, and 1 GB of RAM. Even
>> with the patch that is being finalized to allow start_servers=0, if every
>> client launches 1 or 2 processes, a hundred PHP processes end up running!
>> At 15 MB per process, that is 15*100 = 1,500 MB of RAM used by PHP
>> processes!
>>
>> Whereas, if I can limit the global number of PHP processes, I can set an
>> idle timeout low enough so that, under high load, the resources are shared
>> between all the vhosts.
>>
>> Is that clearer?
>>
>
> Actually that's not a bad idea - it would work as a poor man's burst
> resources on a VPS - but it needs more thought:
> What happens when the limit has been reached and one of the pools wants
> to start a new process?
> 1. Do we kill one of the processes already running?
> 2. Do we disallow starting a new process?
>
> If 1, how do we determine which process to kill?
> If 2, how do we prevent one pool from hogging all the resources just
> because it was first to take them? Do we just wait for a child process
> to die when it reaches max requests?
>
> I think we would need a combination of both, plus the ability to set
> priorities for pools. Going over the configured number of children would
> reduce a pool's priority in geometric progression.
> If starting a new process would have lower priority than killing one
> from the lowest-priority pool (a different pool, of course), we disallow
> it; otherwise we kill that process to make room for the new one. We would
> also need a minimum child lifetime (time, requests?) to prevent two pools
> from killing/starting a new child every other request.
>
>
>
What I'll be looking into soon (as time permits) is extending the
"ondemand" patch so that it can spawn worker processes with different
uids/gids. For the first iteration I plan to force max_requests to 1 so
that children cannot be re-used, for obvious reasons. Together with the
"ondemand" features this would mean you could set up one pool with
start_servers=0; when a new connection comes in, a new child is spawned,
assumes the required uid/gid (determined by the uid/gid of the script and
its root path), processes the request and then terminates.
This should make it possible to handle multiple users with one pool, and
max_children basically becomes the global process limit for fpm.
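
If that works out, a single shared pool could end up looking something
like this (only a sketch - the exact directive names depend on how the
ondemand patch and the uid/gid changes turn out):

    [shared]
    listen = /var/run/fpm-shared.sock
    pm = ondemand                  ; from the ondemand patch
    pm.max_children = 60           ; effectively the global PHP limit
    pm.process_idle_timeout = 10s  ; idle children go away quickly
    pm.max_requests = 1            ; first iteration: no child re-use
    ; the worker uid/gid would be taken from the requested script's
    ; owner, not from a per-pool user/group setting

With 60 as the cap, no combination of vhosts could push the server past
60 PHP workers.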

Regards,
Dennis
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 08:12AM
Yeah. One pool could hog a bunch of resources on a spike and starve the other pools. While I see the reasoning behind the idea of a global limit, the implementation seems too messy to me. I'd rather define pools with strict quotas - the max_children values together should add up to the global limit you want. Then adjust the children counts as needed if you got the original numbers wrong.
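
Something like this is what I mean (pool names and numbers invented; say
you budget roughly 600 MB for PHP at ~15 MB per process, i.e. about 40
children in total):

    ; ~40 children total, split as strict per-pool quotas
    [bigclient]
    pm = dynamic
    pm.max_children = 16

    [mediumclient]
    pm = dynamic
    pm.max_children = 12

    [smallclient1]
    pm = dynamic
    pm.max_children = 6

    [smallclient2]
    pm = dynamic
    pm.max_children = 6

The sum of the pm.max_children values (16+12+6+6 = 40) is your effective
global limit; tune the split per client instead of relying on a shared cap.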

On Sep 12, 2010, at 3:04 AM, Maciej Lisiewski <maciej.lisiewski@gmail.com> wrote:

>> I want a simple limit: the total number of PHP processes launched, across
>> all pools, must never exceed a specific number.
>>
>> Imagine you have 50 clients, one pool per client, and 1 GB of RAM. Even
>> with the patch that is being finalized to allow start_servers=0, if every
>> client launches 1 or 2 processes, a hundred PHP processes end up running!
>> At 15 MB per process, that is 15*100 = 1,500 MB of RAM used by PHP
>> processes!
>>
>> Whereas, if I can limit the global number of PHP processes, I can set an
>> idle timeout low enough so that, under high load, the resources are shared
>> between all the vhosts.
>>
>> Is that clearer?
>>
>
> Actually that's not a bad idea - it would work as a poor man's burst resources on a VPS - but it needs more thought:
> What happens when the limit has been reached and one of the pools wants to start a new process?
> 1. Do we kill one of the processes already running?
> 2. Do we disallow starting a new process?
>
> If 1, how do we determine which process to kill?
> If 2, how do we prevent one pool from hogging all the resources just because it was first to take them? Do we just wait for a child process to die when it reaches max requests?
>
> I think we would need a combination of both, plus the ability to set priorities for pools. Going over the configured number of children would reduce a pool's priority in geometric progression.
> If starting a new process would have lower priority than killing one from the lowest-priority pool (a different pool, of course), we disallow it; otherwise we kill that process to make room for the new one. We would also need a minimum child lifetime (time, requests?) to prevent two pools from killing/starting a new child every other request.
>
>
>
> --
> Maciej Lisiewski
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 09:10AM
> What I'll be looking into soon (as time permits) will be to extend the
> "ondemand" patch so that it can spawn worker processes with different
> uids/gids. For the first iteration I plan to force max_requests to 1 so
> that children cannot be re-used for obvious reasons. Together with the
> "ondemand" features this would mean you could set up one pool with
> start_servers=0 and when a new connection happens a new child is
> spawned, assumes the required uid/gid (determined by the uid/gid of the
> script and its root path), processes the request and then terminates.
> This should make it possible to handle multiple users with one pool and
> max_children basically becomes the global process limit for fpm.

That would cause a child process to be spawned every time there is a
request - far from optimal. I think it would be a better idea to keep
track of the total child process count across the different pools and put
a hard cap there.
I'd rather have pools with low StartServers and MinSpareServers (0 for
extreme cases), while keeping max_children high to accommodate traffic
spikes, plus a safeguard capping total child processes across pools so
the server wouldn't die of swapping ;-)


--
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 09:12AM
On Sun, Sep 12, 2010 at 6:07 AM, Maciej Lisiewski
<maciej.lisiewski@gmail.com> wrote:

> That would cause a child process to be spawned every time there is a request -
> far from optimal.

This is what mod_suphp does for Apache, sadly.
On 09/12/2010 03:07 PM, Maciej Lisiewski wrote:
>> What I'll be looking into soon (as time permits) will be to extend the
>> "ondemand" patch so that it can spawn worker processes with different
>> uids/gids. For the first iteration I plan to force max_requests to 1 so
>> that children cannot be re-used for obvious reasons. Together with the
>> "ondemand" features this would mean you could set up one pool with
>> start_servers=0 and when a new connection happens a new child is
>> spawned, assumes the required uid/gid (determined by the uid/gid of the
>> script and its root path), processes the request and then terminates.
>> This should make it possible to handle multiple users with one pool and
>> max_children basically becomes the global process limit for fpm.
>
> That would cause a child process to be spawned every time there is a
> request - far from optimal. I think it would be a better idea to keep
> track of the total child process count across the different pools and put
> a hard cap there.
> I'd rather have pools with low StartServers and MinSpareServers (0 for
> extreme cases), while keeping max_children high to accommodate traffic
> spikes, plus a safeguard capping total child processes across pools so
> the server wouldn't die of swapping ;-)
>
>
What I'm working towards is not limited to the functionality you are
looking for and has a different motivation. I only mentioned it because it
could also be a potential option for this particular use-case.
As for the "ondemand" spawning, there might be ways to get around that. Has
anybody quantified the "far from optimal" theory so far? What I mean is
that it's not clear how much of an impact this really has on performance.
It would be good to get some actual numbers on that.
You might get around this on-demand spawning cost, though, since the fork()
of the child happens before the setuid() call. What you could do is
pre-spawn worker processes, keep them around with a uid of 0, and have each
one drop to the appropriate uid only when a request comes in.
Technically you could even get rid of the max_requests=1 restriction if
only the main process listened for connections and kept a list of children
and their current uid. Then the connection could be passed to one of the
already spawned children with the appropriate uid, but that is much more
complex than the other stuff.
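
Stripped down to the system calls, this is roughly what I mean (a
standalone sketch, not fpm code; the uid/gid values are made up and error
handling is minimal):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Sketch: the master (running as root) pre-forks a worker that keeps
     * uid 0 and only drops privileges once it knows which user the
     * request belongs to. In real fpm the worker would sit in the FastCGI
     * accept loop instead of getting the uid passed in like this. */
    static void worker(uid_t request_uid, gid_t request_gid)
    {
        /* ... accept the connection, inspect the script owner ... */

        if (setgid(request_gid) != 0 || setuid(request_uid) != 0) {
            perror("dropping privileges failed");
            _exit(1);
        }
        /* From here on the process runs as the vhost's user, handles
         * the PHP request, then exits (max_requests = 1). */
        printf("handling request as uid %d\n", (int)getuid());
        _exit(0);
    }

    int main(void)
    {
        uid_t vhost_uid = 1001;   /* made-up example uid/gid */
        gid_t vhost_gid = 1001;

        pid_t pid = fork();       /* pre-spawn while still root */
        if (pid == 0)
            worker(vhost_uid, vhost_gid);

        waitpid(pid, NULL, 0);
        return 0;
    }

The key point is only that setuid()/setgid() happen after the fork(), in
the already-forked child, which is why the pre-spawned workers could stay
generic until a request arrives.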

Regards,
Dennis
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 11:58AM
> What I'm working towards is not limited to the functionality you are
> looking for and has a different motivation. I only mentioned it because
> it could also be a potential option for this particular use-case.

Great :-)
Where can I find some details?

> As for the "ondemand" spawning there might be ways to get around that.
> Has anybody quantified the "far from optimal" theory so far? What I
> mean is that it's not clear how much of an impact this really has on
> performance. It would be good to get some actual numbers on that.

Spawning a child per request would result in performance similar to
running php as cgi (not fastcgi). So we are talking about an order of
magnitude here.

> You might get around this ondemand spawning though since the fork() of
> the child happens first before the setuid() call. What you could do is
> pre-spawn worker processes and keep them around with a uid of 0 and when
> the request comes in have it drop to another uid then and there.

So this would work in a similar way to suPHP. What about shared memory?
I'm mostly concerned about the opcode cache - without it we're taking
another massive performance hit, 80%+.

> Technically you could even get rid of the max_requests=1 restriction if
> only the main process would listen for connections and keep a list of
> children and their current uid. Then the connection could be passed to
> one of the already spawned children with the appropriate uid but that is
> much more complex than the other stuff.

Unless you can get around this you will have serious performance issues.


--
Maciej Lisiewski
> Great :-)
> Where can I find some details?
>
> > As for the "ondemand" spawning there might be ways to get around that.
> > Has anybody quantified the "far from optimal" theory so far? What I
> > mean is that it's not clear how much of an impact this really has on
> > performance. It would be good to get some actual numbers on that.
>
> Spawning a child per request would result in performance similar to
> running php as cgi (not fastcgi).
>

I don't think so. Maybe I am wrong, but if other parts of the pool (other
processes) are already loaded, forking a new child is normally nothing
like launching a new php-cgi: the environment is already loaded in memory.

> Michael Shadle wrote :
> > That would cause a child process to be spawned every time there is a request -
> > far from optimal.
> this is what mod_suphp does for apache, sadly.

Yes, in the extreme case this would mean one child per request, which in
the end is similar to mod_php, so it is not the worst outcome.


Troll
On Sep 12, 12:04 pm, Maciej Lisiewski <maciej.lisiew...@gmail.com>
wrote:
> > I want a simple limit: the total number of PHP processes launched, across
> > all pools, must never exceed a specific number.
>
> > Imagine you have 50 clients, one pool per client, and 1 GB of RAM. Even
> > with the patch that is being finalized to allow start_servers=0, if every
> > client launches 1 or 2 processes, a hundred PHP processes end up running!
> > At 15 MB per process, that is 15*100 = 1,500 MB of RAM used by PHP
> > processes!
>
> > Whereas, if I can limit the global number of PHP processes, I can set an
> > idle timeout low enough so that, under high load, the resources are shared
> > between all the vhosts.
>
> > Is that clearer?
>
> Actually that's not a bad idea - it would work as a poor man's burst
> resources on a VPS - but it needs more thought:
> What happens when the limit has been reached and one of the pools wants
> to start a new process?
> 1. Do we kill one of the processes already running?
> 2. Do we disallow starting a new process?
>
> If 1, how do we determine which process to kill?
> If 2, how do we prevent one pool from hogging all the resources just
> because it was first to take them? Do we just wait for a child process
> to die when it reaches max requests?
>
> I think we would need a combination of both, plus the ability to set
> priorities for pools. Going over the configured number of children would
> reduce a pool's priority in geometric progression.
> If starting a new process would have lower priority than killing one
> from the lowest-priority pool (a different pool, of course), we disallow
> it; otherwise we kill that process to make room for the new one. We would
> also need a minimum child lifetime (time, requests?) to prevent two pools
> from killing/starting a new child every other request.
>
> --
> Maciej Lisiewski

As for one pool grabbing all the resources, the idea is to keep the idle
timeout low enough (this "timeout" is introduced by the "ondemand" patch)
so that other pools won't have to wait too long before they can create a
child of their own.

But after all, when you are using Apache with mod_php in a shared
environment, if a vhost eats all the resources because of a load spike on
it, the other vhosts just wait in the queue for an Apache process... so
the worst-case scenario of a global maximum of PHP processes behaves just
like Apache's MaxClients directive, i.e. just like Apache2 with mod_php...
And that (mod_php) is, for the moment, the most widely chosen solution in
shared environments.



Troll
On Sep 12, 3:07 pm, Maciej Lisiewski <maciej.lisiew...@gmail.com>
wrote:
> I'd rather have pools with low StartServers and MinSpareServers (0 for
> extreme cases), while keeping max_children high to accommodate traffic
> spikes, plus a safeguard capping total child processes across pools so
> the server wouldn't die of swapping ;-)
>

I am very interested in how you would do this "safeguard" - what are you
using?


Troll
On 09/12/2010 05:56 PM, Maciej Lisiewski wrote:
>> What I'm working towards is not limited to the functionality you are
>> looking for and has a different motivation. I only mentioned it because
>> it could also be a potential option for this particular use-case.
>
> Great :-)
> Where can I find some details?
>
I'll post something here as soon as I find some time to come up with a
proof-of-concept patch. This patch will require the "ondemand" patch from
http://bugs.php.net/52569 to be applied first and will only be a minimal
implementation to check if things really work as I hope they do. Things can
then be enhanced from there.

>> As for the "ondemand" spawning there might be ways to get around that.
>> Has anybody quantified the "far from optimal" theory so far? What I
>> mean is that it's not clear how much of an impact this really has on
>> performance. It would be good to get some actual numbers on that.
>
> Spawning a child per request would result in performance similar to
> running php as cgi (not fastcgi). So we are talking about an order of
> magnitude here.
>
This would not spawn new php processes completely from scratch but only
fork() them from the main process as fpm already does.
>> You might get around this ondemand spawning though since the fork() of
>> the child happens first before the setuid() call. What you could do is
>> pre-spawn worker processes and keep them around with a uid of 0 and when
>> the request comes in have it drop to another uid then and there.
>
> So this would work in a similar way to suPHP. What about shared memory -
> I'm concerned mostly about opcode cache - without it we're getting
> another massive performance hit - 80%+.
>
I haven't used suPHP yet but if I understand the docs correctly then suPHP
actually launches a completely new php instance for every request. What I
plan to do is to only modify the fpm forking a little bit so that the
opcode caches should work no differently than with the "regular" fpm.
>> Technically you could even get rid of the max_requests=1 restriction if
>> only the main process would listen for connections and keep a list of
>> children and their current uid. Then the connection could be passed to
>> one of the already spawned children with the appropriate uid but that is
>> much more complex than the other stuff.
>
> Unless you can go around this you will have serious performance issues.
>
>
This statement is both very vague and very general. Forking vs. starting a
new executable for example will make a difference in performance as will
the ability to use an opcode cache. The ability to pre-spawn processes and
then simply drop privileges when a request comes in will also make a
difference.
There will naturally be some performance loss (unless someone manages to
implement this with zero cost) but I don't think it is useful to
automatically assume that the impact will always be terrible for all
use-cases.

Regards,
Dennis
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 07:24PM
> I am very interessed in the way you can do this "safeguard", what are
> you using ?

I was talking about something I would like to have, not something that
already exists.

--
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 07:36PM
On Sun, Sep 12, 2010 at 4:07 PM, Dennis J. <djacobfeuerborn@gmail.com> wrote:

> I haven't used suPHP yet but if I understand the docs correctly then suPHP
> actually launches a completely new php instance for every request. What I
> plan to do is to only modify the fpm forking a little bit so that the opcode
> caches should work no differently than with the "regular" fpm.

From what I could tell when I used it (on a low-traffic site it was cool;
then I tried it on my hosting cluster and it couldn't handle the load), it
launches a new php-cgi process with the desired privileges for every
single request.

> This statement is both very vague and very general. Forking vs. starting a
> new executable for example will make a difference in performance as will the
> ability to use an opcode cache. The ability to pre-spawn processes and then
> simply drop privileges when a request comes in will also make a difference.
> There will naturally be some performance loss (unless someone manages to
> implement this with zero cost) but I don't think it is useful to
> automatically assume that the impact will always be terrible and for all
> use-cases.

My understanding of ondemand is that it will fork() - but with the
privilege thing in the mix, that will complicate things. I don't know
enough about process management and such, but I thought you couldn't
fork() and then do setgid/setuid-type operations on the child - only on a
new physical process. (We don't need to start a discussion on that - you
and Jerome seem to be quite well-versed in all of that; excuse my possibly
incorrect terms :))
Maciej Lisiewski
Re: Global limit of number of processes in php-fpm or FastCGI
September 12, 2010 07:54PM
> I'll post something here as soon as I find some time to come up with a
> proof-of-concept patch. This patch will require the "ondemand" patch
> from http://bugs.php.net/52569 to be applied first and will only be a
> minimal implementation to check if things really work as I hope they do.
> Things can then be enhanced from there.

OK, I read that "bug report"

> This would not spawn new php processes completely from scratch but only
> fork() them from the main process as fpm already does.

OK, there shouldn't be any performance issues - if the numbers in your
2010-08-30 post are right it's just a few ms per process.

> I haven't used suPHP yet but if I understand the docs correctly then
> suPHP actually launches a completely new php instance for every request.
> What I plan to do is to only modify the fpm forking a little bit so that
> the opcode caches should work no differently than with the "regular" fpm.

I was thinking about preforked child processes - AFAIK the opcode cache
is per-pool. Also, if the default state is 0 processes running in a pool,
that automatically means no opcode cache.
This will of course be easy to work around once xcache finally allows
storing the opcode cache on disk to prime the cache the next time PHP is
started, but xcache development has almost stopped and this feature has
been in planning for years now...

> This statement is both very vague and very general. Forking vs. starting
> a new executable for example will make a difference in performance as
> will the ability to use an opcode cache. The ability to pre-spawn
> processes and then simply drop privileges when a request comes in will
> also make a difference.

Well, if new processes were spawned rather than forked, as I initially
thought, we'd be getting 10-20 times worse performance. Not being able to
use an opcode cache reduces performance significantly as well (5-10 times).

> There will naturally be some performance loss (unless someone manages to
> implement this with zero cost) but I don't think it is useful to
> automatically assume that the impact will always be terrible and for all
> use-cases.

True. My goal was to point out possible problems that need to be taken
into account.

The ondemand patch is good for a shared host with many (possibly
hundreds of) accounts (pools) with little traffic.

A global limit would be more useful with few accounts (pools), say 10-20:
- if at least 1 child remained for each pool at all times, we'd keep the
opcode cache
- should a traffic spike hit one of the pools, it could cannibalize other
pools' resources while leaving them still operational
- if the processes stayed alive long enough, there would be no need to
prefork
Basically you could set each pool with low maxSpareServers and high
max_children - as high as 90% of server resources, for example - and not
worry about the server getting into a downward swapping spiral.


--
Maciej Lisiewski
On 09/13/2010 01:51 AM, Maciej Lisiewski wrote:
> The ondemand patch is good for a shared host with many (possibly
> hundreds of) accounts (pools) with little traffic.
>
> A global limit would be more useful with few accounts (pools), say 10-20:
> - if at least 1 child remained for each pool at all times, we'd keep the
> opcode cache
> - should a traffic spike hit one of the pools, it could cannibalize other
> pools' resources while leaving them still operational
> - if the processes stayed alive long enough, there would be no need to
> prefork
> Basically you could set each pool with low maxSpareServers and high
> max_children - as high as 90% of server resources, for example - and not
> worry about the server getting into a downward swapping spiral.
>
>
Yes, that's exactly what I am trying to get :)

I think we can do that easily just by putting a limit on the number of
processes (an additional condition in the fpm_children_make() function)
and setting max_spare_servers low and/or a low idle timeout (with the
ondemand patch).
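
Something along these lines is what I imagine (a hypothetical sketch
only - these names and counters do not exist in the real fpm source; the
real change would sit next to fpm_children_make()):

    #include <stdio.h>

    #define GLOBAL_MAX_CHILDREN 60           /* the new server-wide knob */

    static int global_running_children = 58; /* would be kept up to date */
                                              /* across all pools by fpm  */

    /* The extra condition: refuse to fork once the server-wide count
     * reaches the cap, regardless of which pool asked. */
    static int global_limit_allows_spawn(int nb_to_spawn)
    {
        return global_running_children + nb_to_spawn <= GLOBAL_MAX_CHILDREN;
    }

    int main(void)
    {
        printf("spawn 1 more? %s\n", global_limit_allows_spawn(1) ? "yes" : "no");
        printf("spawn 5 more? %s\n", global_limit_allows_spawn(5) ? "yes" : "no");
        return 0;
    }

The counter would of course have to be incremented and decremented by the
master for every pool's children, not just per pool.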

But, in the end, if I understand your answers correctly, it is not
possible at the moment - we have to customize the source code, right?

That's what I'd like to do, but for the moment I can't get php-fpm built
from the (unmodified) sources running correctly - it stays stuck at 3
processes, with nginx, without any modification to the source code... I
can't understand why :-/


Troll