privilege separation per request

Kai Krakow
privilege separation per request
November 24, 2010 02:50PM
We need to run requests to each vhost using a separate uid/gid.
Theoretically php-fpm supports this by defining a pool for each vhost,
but that is anything but feasible. In a shared hosting system with more
than 600 distinct users I cannot create 600 pools, let alone handle the
configuration needed and the resource overhead. And no, some ondemand
pools which support a minimum of 0 processes would not make me happy,
because there's still huge configuration overhead.

Looking beyond php-fpm, passenger solves this by looking at the uid/gid
of the application's configuration file and switching to these for
request processing. The process then stays alive to handle the next
request. Passenger adaptively spawns processes and tears them down
within the limits set by the global pool size and the per-application
pool size.

I think php-fpm needs a similar architecture: set maximum pool sizes
per user and globally, and let the master process manage these. It would
be totally fine in this scenario if the last process for a uid gets
torn down, although a config option for a minimum number of processes
per uid could be handy.

The next problem is that php applications usually don't follow any
standard filesystem layout, so one cannot expect to read the uid/gid of
a config file or some other central file. But I propose it would be
sufficient to just switch to the uid/gid of the owner of the file being
processed. You shouldn't usually find any files belonging to a foreign
user within a vhost when the system was set up cleanly right from the
start, and a proper open_basedir setting will ensure that scripts
cannot place foreign files into the directories of other vhosts. As a
safety measure, php-fpm could only do uid/gid switching if the
directory is only writeable by the script owner, or only do switching
if directory and script owners match, or switch to the owner of the
DocumentRoot (if that is known at process spawn decision time), and of
course never switch to root. If switching fails due to these
constraints, it would simply switch to a fallback user (www-data or
nobody).
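Roughly the constraint checks I have in mind, as a minimal sketch (this
is not existing php-fpm code; the www-data fallback and the helper name
are just assumptions for illustration):

/* Illustrative sketch only - not php-fpm code. The checks follow the
 * constraints described above; "www-data" as fallback is an assumption. */
#include <libgen.h>
#include <pwd.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

static void pick_request_ids(const char *script, uid_t *uid, gid_t *gid)
{
    struct stat st_script, st_dir;
    char dirbuf[4096];
    struct passwd *fallback = getpwnam("www-data");

    /* start from the fallback user (www-data, or nobody if missing) */
    *uid = fallback ? fallback->pw_uid : (uid_t)65534;
    *gid = fallback ? fallback->pw_gid : (gid_t)65534;

    strncpy(dirbuf, script, sizeof dirbuf - 1);
    dirbuf[sizeof dirbuf - 1] = '\0';

    if (stat(script, &st_script) != 0 || stat(dirname(dirbuf), &st_dir) != 0)
        return;                               /* cannot verify: keep fallback */
    if (st_script.st_uid == 0)
        return;                               /* never switch to root */
    if (st_dir.st_uid != st_script.st_uid)
        return;                               /* directory and script owners must match */
    if (st_dir.st_mode & (S_IWGRP | S_IWOTH))
        return;                               /* directory must be writable only by its owner */

    *uid = st_script.st_uid;                  /* constraints hold: use the script owner */
    *gid = st_script.st_gid;
}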

What I have in mind is a plug-and-play configuration: no need to touch
every vhost, no need to set up a huge pool of mostly useless processes
that occupy valuable resources. It should just out-of-the-box switch
to the script owner (or some other derived owner), and check a sane
set of constraints on uid switching. Look at how easily passenger does
it (and it is still safe to use and very performant).

Regards,
Kai
Jonathan Langevin
Re: privilege separation per request
November 24, 2010 02:58PM
Owner of the docroot would be the safest method. Considering how often and how
easily permissions/ownership gets screwed up on shared servers, I would
never rely on that as a solution for determining what user to execute as.


On Wed, Nov 24, 2010 at 11:00 AM, Kai Krakow <hurikhan77@gmail.com> wrote:

> [Kai's original message quoted in full - snipped]
Michael Shadle
Re: privilege separation per request
November 24, 2010 03:28PM
My suggestion would be possibly docroot, or possibly just a configurable
definition, something like

location /home/mike {
uid 1000; gid 1000;
}

location /home/mike/somethingelse {
uid 1001; gid 1001;
}

location /foo {
uid 1054; gid 2000;
}

(or use user/group names - there are a lot of implementation details to sort out)

Originally I was going to say that "ondemand" is probably what you're
looking for. However, you say that's not economical. I would ask: why
not? It's probably scriptable, and you could use includes for it...

I think the "instant process creation using a specific gid/uid" proved
to be too expensive and that's why the ondemand method was developed
that way. However, if it can sit around and intelligently and safely
adaptively spawn on demand I don't see a difference in performance
with changing the conditions in which it spawns. All the work being
done on "ondemand" might wind up being able to be used for something
like this (or I could be totally wrong and am making horribly
incorrect assumptions)


On Wed, Nov 24, 2010 at 11:56 AM, Jonathan Langevin <intel352@gmail.com> wrote:
> Owner of the docroot would be the safest method. Considering how often and how
> easily permissions/ownership gets screwed up on shared servers, I would
> never rely on that as a solution for determining what user to execute as.
>
> > [Kai's original message quoted in full - snipped]
Kai Krakow
Re: privilege separation per request
November 24, 2010 03:38PM
I also think that would make the most sense: determine the owner of the
docroot and switch to that. I'd like to integrate my idea into the
php-fpm code but I'm not sure where to start. Everything seems to be
pretty scattered around in the code: there's nothing identifiable as
the "process manager", nor do there seem to be separate modules for the
static and dynamic manager. I had hoped I could just copy the dynamic
manager as a module, modify it and integrate it into the process
manager. But "process manager" seems to be everywhere in the code. I
just can't get a grasp of the architecture in this code - does anyone
want to explain? Are there docs?

Anyone here who has a better idea of the code and likes to work with
me on that?

I've seen there's code on github but it seems pretty outdated. For
development I'd like to use something separate from the main php
distribution.

As far as I understand, there's the master process which listens for
incoming fastcgi requests. It then scans the list of non-busy processes
and passes the request to one of them if it finds one, otherwise it
spawns a new one. Newly spawned processes setuid() themselves. Let's
leave management/maintenance processing aside for now to keep it simple.

Using my idea and understanding, I'd need to modify this processing at
various stages:

1. On an incoming request, decide which user runs this request (use
some safety measures here, fall back to the configured uid), optionally
use a defined user-id from the vhost config (if that is accessible)
2. When scanning for non-busy processes, also compare the user-id
3. Spawn a new process if needed, but do not setuid() to the
configured user-id, instead use the detected uid
4. Instead of configuring pools statically at startup, use dynamic
pools at runtime - one per docroot (a rough sketch follows below)
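To make steps 1-3 concrete, a hypothetical skeleton - all types and
helper names (fpm_request_t, fpm_child_t, detect_request_uid,
spawn_child, dispatch) are invented for illustration and do not match
the real php-fpm sources:

/* Hypothetical skeleton for steps 1-3 above; not real php-fpm code. */
#include <sys/types.h>

typedef struct fpm_request_s fpm_request_t;

typedef struct fpm_child_s {
    pid_t pid;
    uid_t uid;                 /* the uid this child setuid()'d to */
    int   busy;
    struct fpm_child_s *next;
} fpm_child_t;

uid_t detect_request_uid(fpm_request_t *req);   /* step 1: script/docroot owner, with fallback */
fpm_child_t *spawn_child(uid_t uid);            /* step 3: fork, then setuid() to the detected uid */
void dispatch(fpm_child_t *child, fpm_request_t *req);

void handle_request(fpm_child_t *children, fpm_request_t *req)
{
    uid_t uid = detect_request_uid(req);            /* step 1 */

    for (fpm_child_t *c = children; c; c = c->next)
        if (!c->busy && c->uid == uid) {            /* step 2: reuse an idle child with the same uid */
            dispatch(c, req);
            return;
        }

    dispatch(spawn_child(uid), req);                /* step 3: new child running as the detected uid */
}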

Looking at the code, I have the feeling this could be easier to create
from scratch - given that php-fpm is currently designed with a
completely different approach in mind (static/semi-dynamic pools
configured at startup time).

Correct me if I'm wrong or you see serious security implications.

On 24 Nov., 20:56, Jonathan Langevin <intel...@gmail.com> wrote:
> Owner of the docroot would be the safest method. Considering how often and how
> easily permissions/ownership gets screwed up on shared servers, I would
> never rely on that as a solution for determining what user to execute as.
Kai Krakow
Re: privilege separation per request
November 24, 2010 03:54PM
On 24 Nov., 21:25, Michael Shadle <mike...@gmail.com> wrote:
> Originally I was going to say that "ondemand" is probably what you're
> looking for. However, you say that's not economical. I would say why
> not? It's probably scriptable and use includes for it...

Well, scriptable by what tools? I don't like to run some sort of
"Makefile" tool which collects information from different locations
and recombines them, just to create a configuration with redundant
information. If you want to configure that, hey: why not? It can be
handy. But usually it's enough to derive it from the owner of the
docroot - automatically. No configuration involved. Administrators
would love it.

> I think the "instant process creation using a specific gid/uid" proved
> to be too expensive and that's why the ondemand method was developed
> that way. However, if it can sit around and intelligently and safely
> adaptively spawn on demand I don't see a difference in performance
> with changing the conditions in which it spawns. All the work being
> done on "ondemand" might wind up being able to be used for something
> like this (or I could be totally wrong and am making horribly
> incorrect assumptions)

I'd like to call my idea of a process manager "adaptive", not
"ondemand". This process manager would spawn no children initially
(like "ondemand"). And like "ondemand" it would spawn processes, but
let them autodetect their uid if it wasn't configured in some way. As
far as I understand fastcgi, the idea is to leave the process alive
after a request so it is immediately ready for the next request. My
idea would do the same. Correct me if this is wrong.

A process CAN exit after X amount of time being idle (but I wouldn't
configure it to do so). A process HAS to exit if the pool of processes
cannot serve the next request (for whatever reason, e.g. no more slots
free and no process running with a matching user-id). The process to be
killed can be taken from an LFU list. If all processes are busy, the
request has to be queued.

The master process can do maintenance on a regular basis and kill
processes which were idle for some time, or whose user-id has too many
idle processes, or which have served too many requests, or kill
processes from the LFU list if there are requests in the queue. This
should be configurable. It serves one purpose or another (eliminating
problems with memory leaks, etc).
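As a sketch of those rules (the struct and thresholds below are
invented for illustration; nothing like this exists in php-fpm today):

/* Sketch of the exit/kill rules described above; illustrative only. */
#include <sys/types.h>
#include <time.h>

struct child {
    uid_t  uid;
    int    busy;
    time_t last_used;        /* when it last served a request */
    long   requests_served;  /* to recycle leaky processes */
};

/* Returns 1 if this child may be killed; when the request queue is
 * non-empty, the caller would pick the least frequently used candidate. */
static int kill_candidate(const struct child *c, time_t now, int queue_len,
                          int idle_timeout, long max_requests)
{
    if (c->busy)
        return 0;
    if (idle_timeout > 0 && now - c->last_used > idle_timeout)
        return 1;                     /* idle for too long */
    if (max_requests > 0 && c->requests_served >= max_requests)
        return 1;                     /* served too many requests */
    if (queue_len > 0)
        return 1;                     /* requests are waiting: free a slot */
    return 0;
}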

I don't think it adds overhead which "ondemand" wouldn't add - at
least if fastcgi works the way I imagine.

Regards,
Kai
Michael Shadle
Re: privilege separation per request
November 24, 2010 03:56PM
On Wed, Nov 24, 2010 at 12:52 PM, Kai Krakow <hurikhan77@gmail.com> wrote:
> Well, scriptable by what tools? I don't like to run some sort of
> "Makefile" tool which collects information from different locations
> and recombines them, just to create a configuration with redundant
> information. If you want to configure that, hey: why not? It can be
> handy. But usually it's enough to derive it from the owner of the
> docroot - automatically. No configuration involved. Administrators
> would love it.

run a cronjob or something to parse /etc/passwd or some other file(s),
create an .ini file for each of them...

see how i've split apart my configs here:
http://michaelshadle.com/2010/08/26/cleanest-configuration-for-the-new-php-fpm/
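A minimal generator along those lines could look like this (just a
sketch: the uid cutoff, the /etc/php-fpm.d/ include directory and the
pool settings are assumptions; user, group, listen, pm and
pm.max_children are real pool directives, "ondemand" being the pm
discussed in this thread):

/* Sketch of the cron idea: write one php-fpm pool file per system user.
 * Paths, the uid cutoff and the pool settings are assumptions. */
#include <pwd.h>
#include <stdio.h>

int main(void)
{
    struct passwd *pw;
    char path[512];
    FILE *f;

    setpwent();
    while ((pw = getpwent()) != NULL) {
        if (pw->pw_uid < 1000)        /* skip system accounts (assumed cutoff) */
            continue;
        snprintf(path, sizeof path, "/etc/php-fpm.d/%s.conf", pw->pw_name);
        if ((f = fopen(path, "w")) == NULL)
            continue;
        fprintf(f, "[%s]\n", pw->pw_name);
        fprintf(f, "listen = /var/run/php-fpm-%s.sock\n", pw->pw_name);
        fprintf(f, "user = %s\n", pw->pw_name);
        fprintf(f, "group = %s\n", pw->pw_name);
        fprintf(f, "pm = ondemand\n");
        fprintf(f, "pm.max_children = 5\n");
        fclose(f);
    }
    endpwent();
    return 0;
}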
Kai Krakow
Re: privilege separation per request
November 24, 2010 04:12PM
On 24 Nov., 21:25, Michael Shadle <mike...@gmail.com> wrote:
> However, if it can sit around and intelligently and safely
> adaptively spawn on demand I don't see a difference in performance
> with changing the conditions in which it spawns. All the work being
> done on "ondemand" might wind up being able to be used for something
> like this (or I could be totally wrong and am making horribly
> incorrect assumptions)

Thinking about it, the most feasible way would be to integrate this
adaptive spawning (my idea) into the pool creation. That means: not
only create processes on demand, but also create pools on demand based
on the uid to be used.

In the end, my idea is only about reducing configuration overhead. Why
configure all pools at startup? Why not just create them on demand at
runtime? If the dynamic pool manager is able to reduce the process
count of a pool to 0, we could even simplify things and never discard
pools when they are unused.

The only problem I see here is that currently each pool has its own
master process, while my idea needs a "master master" process to
dynamically distribute requests to the right pool. This would also
pull the pool configuration out of the per-vhost configuration, back
into the global scope. That may make some things complicated - so my
other idea (put knowledge about the running uids into the process
manager) may be easier.

And back to my idea of reducing configuration overhead: I'd like to
eliminate per-user pool configuration not only from php-fpm; I also
don't want to create separate handlers per vhost configuration in the
webserver.

Currently, my apache configuration is a one-liner (thanx to
mod_fastcgi_handler) - and I want it to stay that way:

$ cat /etc/apache2/modules.d/30_php_fpm.conf
<IfDefine FASTCGI_HANDLER>
AddHandler fcgi:127.0.0.1:9000 .php .php5 .phtml
DirectoryIndex index.php index.php5 index.phtml
</IfDefine>

This test system currently runs only one user, so this is no problem.
But when we start migrating all (or most) of the domains to this new
system, I don't want to touch two configuration files every time. At
first glance php-fpm looked like the perfect solution - until I
discovered that "per vhost uid/gid" means "create many, many pools".
This is no better a design than apache has; php-fpm currently just
moves the problem from apache into a fastcgi manager without solving
it (but at least it improves apache performance and stability). Or,
speaking of threaded-only webservers: there is a real improvement -
per-user hosting. But it's not the perfect solution.
Jonathan Langevin
Re: privilege separation per request
November 24, 2010 04:16PM
This may be of interest to you: http://bugs.php.net/bug.php?id=52569

On Wed, Nov 24, 2010 at 4:11 PM, Kai Krakow <hurikhan77@gmail.com> wrote:

> [Kai's message quoted in full - snipped]
Kai Krakow
Re: privilege separation per request
November 24, 2010 04:28PM
On 24 Nov., 21:55, Michael Shadle <mike...@gmail.com> wrote:
> run a cronjob or something to parse /etc/passwd or some other file(s),
> create an .ini file for each of them...
>
> see how i've split apart my configs here:http://michaelshadle.com/2010/08/26/cleanest-configuration-for-the-ne...

Ah, nice idea with this "include". But I still dislike the idea of
creating configs after I've modified other configs. It should be one
fluid process, as atomic as possible.

I'm planning to create vhosts directly from mysql tables**, filtered
by a server id - because this is how domains are currently managed
here. There's an existing tool in production, and it currently works
by scanning this table, writing config files and restarting apache.
This creates all sorts of havoc and hiccups from time to time. And
there's a lag between saving the configuration and deploying it.
Sometimes the lag is big; sometimes apache even just dies. :-(

**Note: only if I find a module which does that; otherwise I'll stick
with the current way. I know lighttpd has such vhost modules.

So, based on my previous experience: No, I don't want that scripted.
If I need to script anything, impact should be as minimal as possible
- and your idea would actually increase impact even more (another
config file to write, another process to restart/reload).

BTW: I didn't want to sound rude - your implementation is a clever
idea I may pick up if the other idea fails.
Michael Shadle
Re: privilege separation per request
November 24, 2010 04:32PM
On Wed, Nov 24, 2010 at 1:27 PM, Kai Krakow <hurikhan77@gmail.com> wrote:

> So, based on my previous experience: No, I don't want that scripted.
> If I need to script anything, impact should be as minimal as possible
> - and your idea would actually increase impact even more (another
> config file to write, another process to restart/reload).

yeah but it's not done each request in real-time :p it's defined at
server start. also lets you create nginx vhost files and php-fpm
include files if needed.

not the best solution but it is -an- option that exists right now.
Kai Krakow
Re: privilege separation per request
November 24, 2010 04:38PM
On 24 Nov., 22:15, Jonathan Langevin <intel...@gmail.com> wrote:
> This may be of interest to you:http://bugs.php.net/bug.php?id=52569

No, that won't work from a security point of view. It trades security
for performance - that's the wrong approach. The decision about the
user id must be moved into the pool management and be tracked there,
and the decision must be passed to the child to drop privileges there,
or it won't work. This will ensure security. Tracking the children's
ownership in the pool manager/master solves the performance problem,
because keeping the knowledge there allows having long-living
processes which can process multiple requests for the same uid, so we
would also not trade off performance.
Dennis J.
Re: privilege separation per request
November 24, 2010 04:48PM
On 11/24/2010 10:35 PM, Kai Krakow wrote:
> On 24 Nov., 22:15, Jonathan Langevin<intel...@gmail.com> wrote:
>> This may be of interest to you:http://bugs.php.net/bug.php?id=52569
> No, that won't work from a security point of view. It trades security
> for performance - that's the wrong approach. The decision about the
> user id must be moved into the pool management and be tracked there,
> and the decision must be passed to the child to drop privileges there,
> or it won't work. This will ensure security. Tracking the children's
> ownership in the pool manager/master solves the performance problem,
> because keeping the knowledge there allows having long-living
> processes which can process multiple requests for the same uid, so we
> would also not trade off performance.
I've been trying to implement something like this since forever, but so
far other stuff has always managed to get in the way. My use-case has
the additional constraint that the users are not part of /etc/passwd,
and their user-id and docroot have to be determined dynamically from
the request path. In order to do that, my plan is to match the request
url against a regular expression like "/([a-z]+)/index.php", interpret
the first captured group as the username and append it to the global
storage for all accounts, e.g. "/web/docroots", so you end up with the
filesystem path "/web/docroots/username". Then the plan is to get the
uid of that directory and the uid of the called script, e.g.
"index.php", and if they match, switch the process to that uid.
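Roughly like this, as an illustrative sketch (the pattern and paths are
just the example values above; none of this is existing FPM code):

/* Illustrative sketch of the idea above; not existing FPM code. */
#include <regex.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Returns 0 and fills *uid when the request path maps to a user whose
 * docroot and index.php share the same (non-root) owner. */
static int uid_for_request(const char *uri, uid_t *uid)
{
    regex_t re;
    regmatch_t m[2];
    char user[64], dir[256], script[256];
    struct stat st_dir, st_script;

    if (regcomp(&re, "^/([a-z]+)/index\\.php", REG_EXTENDED) != 0)
        return -1;
    if (regexec(&re, uri, 2, m, 0) != 0) {
        regfree(&re);
        return -1;
    }
    snprintf(user, sizeof user, "%.*s",
             (int)(m[1].rm_eo - m[1].rm_so), uri + m[1].rm_so);
    regfree(&re);

    snprintf(dir, sizeof dir, "/web/docroots/%s", user);   /* global storage for all accounts */
    snprintf(script, sizeof script, "%s/index.php", dir);

    if (stat(dir, &st_dir) != 0 || stat(script, &st_script) != 0)
        return -1;
    if (st_dir.st_uid != st_script.st_uid || st_dir.st_uid == 0)
        return -1;                    /* owners must match, and never root */

    *uid = st_dir.st_uid;
    return 0;
}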

Regards,
Dennis
Kai Krakow
Re: privilege separation per request
November 24, 2010 04:54PM
On 24 Nov., 22:30, Michael Shadle <mike...@gmail.com> wrote:
> On Wed, Nov 24, 2010 at 1:27 PM, Kai Krakow <hurikha...@gmail.com> wrote:
> > So, based on my previous experience: No, I don't want that scripted.
> > If I need to script anything, impact should be as minimal as possible
> > - and your idea would actually increase impact even more (another
> > config file to write, another process to restart/reload).
>
> yeah but it's not done each request in real-time :p it's defined at
> server start. also lets you create nginx vhost files and php-fpm
> include files if needed.

Actually, it doesn't really need to be "real-time". This sort of query
and instant configuration is cacheable. An idle vhost could invalidate
its "knowledge" after a few seconds; a busy vhost should refresh its
"knowledge" maybe every few minutes. But the point is: I don't want to
add more configuration files and more processes to the cronjob (which -
surprise, surprise - I'm currently using to do exactly that). But I
think this is getting off-topic - the topic is how to get the user-id
into php-fpm without running multiple pools.

> not the best solution but it is -an- option that exists right now.

Yes, and if php-fpm is not easily patchable I'll probably stick with that.
Jérôme Loyet
Re: privilege separation per request
November 24, 2010 04:56PM
2010/11/24 Kai Krakow <hurikhan77@gmail.com>
>
> On 24 Nov., 22:15, Jonathan Langevin <intel...@gmail.com> wrote:
> > This may be of interest to you:http://bugs.php.net/bug.php?id=52569
>
> No, that won't work from a security point of view. It trades security
> for performance - that's the wrong approach. The decision about the
> user id must be moved into the pool management and be tracked there,
> and the decision must be passed to the child to drop privileges there,
> or it won't work. This will ensure security.

This won't work, because FPM does not operate this way. In the FPM
architecture there is:
- one process manager
- 1 to many pools
- 1 to many children per pool

Each request arrives directly at the children of the concerned pool.
That's why there is one listening socket per pool. The "load balancing"
of the requests to the children is done by the kernel (see man socket).

The process manager is a PROCESS manager and not a REQUEST manager or
a pool manager.

This means that the process manager does not see incoming requests. And
this is the main part of the security.

What you want to do is to make the process manager see each request,
read its content and, depending on that content, load-balance the
request to the concerned child (which has been created for the
occasion or has been cached by the process manager). In this context,
we're breaking the in-place security barrier.

Even if we assume that the reading part will be done right and no
security hole will be found, the process manager still becomes yet
another proxy (after nginx/apache, and maybe the load balancer in
front of the web server) in the chain to deliver pages.

I really understand your need, but I really think moving this
function into FPM is not the solution. A lot of companies are using
the 'scripting' method to handle this kind of problem and it's
working just fine.

Michael Shadle
Re: privilege separation per request
November 24, 2010 05:04PM
On Wed, Nov 24, 2010 at 1:47 PM, Dennis J. <djacobfeuerborn@gmail.com> wrote:
> I've been trying to implement something like this since forever, but so
> far other stuff has always managed to get in the way. My use-case has
> the additional constraint that the users are not part of /etc/passwd,
> and their user-id and docroot have to be determined dynamically from
> the request path. In order to do that, my plan is to match the request
> url against a regular expression like "/([a-z]+)/index.php", interpret
> the first captured group as the username and append it to the global
> storage for all accounts, e.g. "/web/docroots", so you end up with the
> filesystem path "/web/docroots/username". Then the plan is to get the
> uid of that directory and the uid of the called script, e.g.
> "index.php", and if they match, switch the process to that uid.

Before discovering PHP-FPM, I believe this was my idea, more or less: a
daemon to listen to all inbound requests, determine what uid/gid they
should run as, and execute the code as that user.

Security needs to be tight and performance can be a concern.

I trust Jerome and other people working on FPM.

As he says it's not really a viable solution (however, who is to say
that just because FPM is done the current way it can't be changed?
Perhaps it could. I don't know. That's the beauty of open source
though... FPM at one point was just a pet project of Andrei's...)
Dennis J.
Re: privilege separation per request
November 24, 2010 05:10PM
On 11/24/2010 10:54 PM, Jérôme Loyet wrote:
> [Jérôme's message quoted in full - snipped]
How else do you want to address this use-case? You cannot put this
proxy outside of FPM, because then the proxy cannot manage the process
privileges of the FPM process that handles the request. How would you
technically determine the uid to run the process under outside of FPM,
yet then have the FPM child run under that uid?

Also, isn't "The process manager is a PROCESS manager and not a REQUEST
manager or a pool manager" being addressed by the on-demand patch
mentioned above? That patch basically has FPM running with zero
children and then spawns them as the requests come in. The one thing
that needs to be added is the switching of that process to the proper
uid/gid. Then the next step to make this perform well would be to have
FPM keep track of which child is running with a particular uid/gid, so
that a request for that same uid/gid could skip the creation of a new
child and instead pass the request to an existing one.

Regards,
Dennis
Kai Krakow
Re: privilege separation per request
November 24, 2010 05:30PM
Okay, you give me a better understanding of the underlying problem.
Thank you for that.

Looking at mod_passenger, it seems to solve this by creating new
sockets for each pool it creates - and directly connecting the vhost
to this newly created socket. At least when looking at "netstat -nlp"
there are lots of unix sockets listening for requests.

Wouldn't it be possible for the master process to create a new socket
when a new pool is created on demand, then fork and just close the file
handle? The file handle gets duplicated into the fork and is still
open there. Now vhost requests are connected to this fork. This
process becomes the master of its own workers and spawns further
processes which then do the actual request processing. This way the
kernel can still do the load balancing.
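Something like this, as a very rough sketch (not php-fpm code; the
helper name and the simplified error handling are mine):

/* Rough sketch of the idea above, not php-fpm code: the master creates a
 * unix socket for a pool needed on demand, forks, and the child keeps the
 * listening fd, drops privileges and manages the workers for that uid,
 * while the parent just closes its copy. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/un.h>
#include <unistd.h>

static pid_t spawn_pool(const char *sock_path, uid_t uid, gid_t gid)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof addr.sun_path - 1);
    unlink(sock_path);

    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 128) < 0) {
        perror("pool socket");
        return -1;
    }

    pid_t pid = fork();
    if (pid == 0) {                    /* child: pool master for this uid */
        if (setgid(gid) < 0 || setuid(uid) < 0)
            _exit(1);                  /* never continue as root on failure */
        /* ... accept() on fd here and spawn the workers for this pool ... */
        _exit(0);
    }

    close(fd);                         /* parent: the fd stays open in the child */
    return pid;
}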

I'm just not sure if it is possible to modify apache's vhost
configuration at runtime so requests get passed to another socket.
Otherwise php-fpm would have to become a thin proxy which just forwards
connections and keeps track of which user-id listens on which child.

But at least now I understand that php-fpm has a completely different
architecture and would need another proxy layer in front of it to do
what I "dream" of.

On 24 Nov., 22:54, Jérôme Loyet <m...@fatbsd.com> wrote:
> [Jérôme's message quoted in full - snipped]