I think that this discussion touches on another question - are millisecond timings
still sufficient when monitoring web applications?
I think that in 2017, with the astounding increases in processing power we
have seen in the last decade, millisecond timings are too imprecise. The cost
of capturing a timestamp on Linux on recent hardware is about 30 nanoseconds,
and the precision of such a timestamp is also around 30 nanoseconds. I think
there is a good argument for exposing timestamps at the maximum level of
precision available, rather than hiding what could be useful diagnostic data.
Are there any plans within nginx to report higher-resolution timings?
Peter
> On Oct 29, 2017, at 9:35 AM, yang chen <shanchuan04@gmail.com> wrote:
>
> Thanks for your reply. Why is calling ngx_event_expire_timers unnecessary
> when the ngx_process_events handler returns so quickly that millisecond
> precision is not enough to capture the elapsed time (less than 1 ms, maybe)?
>
> The ngx_process_events handler returning quickly doesn't mean that
> ngx_event_process_posted returns quickly; it may execute for 2 ms or more.
> _______________________________________________
> nginx mailing list
> nginx@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx