Sockets in CLOSE_WAIT state

Juli Mallett juli at clockworksquid.com
Tue May 14 09:16:35 PDT 2013


Hi Diego,

I'm sure this works, but it's not *quite* right, and it will break
some kinds of connections.  Consider an HTTP client which sends a
request, then does shutdown(SHUT_WR) on the socket, and then the
response takes some time to arrive.  In that case, you will have
destroyed the connection before the response arrives.  (If the
response doesn't take long to arrive, you might conceivably win some
kind of race.)

Your analysis of the problem is likely completely right, though.  I
would *guess* that this is the most important part:

- The WP Client closes the connection without doing the EOS/EOS_ACK exchange.

I'm not sure I gave much thought to how to handle that.  Here's what I
think is supposed to happen; maybe you can tell me whether it does
happen, and whether you think it's sufficient:

In ProxyConnector, we set up a Splice in each direction, and then link
them with a SplicePair.  The connection from the client to the server
is the incoming_splice_.

1) The incoming_splice_ is reading from its source_ stream, which is
the client's socket.
2) EventPoll is notified of an error on the socket, and
Splice::read_complete is called.
3) Splice::read_complete calls Splice::complete with error.
4) This error is propagated to SplicePair::splice_complete.
5) Because it is an error, SplicePair::splice_complete calls
right_action_->cancel() which stops the outgoing_splice_.
6) SplicePair::splice_complete then calls
ProxyConnector::splice_complete with error.
7) ProxyConnector::splice_complete calls ProxyConnector::schedule_close.
8) ProxyConnector::schedule_close proceeds to close both sockets.
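
Here's a toy model of steps 5 through 8, just to pin down the
behaviour I expect.  None of these names are the real WANProxy classes
or signatures; it's purely illustrative:

#include <functional>
#include <iostream>

struct ToySplice {
	bool cancelled = false;
	void cancel() { cancelled = true; }
};

struct ToySplicePair {
	ToySplice left, right;                      /* incoming/outgoing splices */
	std::function<void(bool)> owner_callback;   /* stands in for ProxyConnector::splice_complete */

	/* Called when one of the two splices finishes (EOS or error). */
	void splice_complete(ToySplice *which, bool error) {
		if (error) {
			/* Step 5: an error in one direction cancels the other. */
			ToySplice *other = (which == &left) ? &right : &left;
			other->cancel();
		}
		/* Step 6: hand the result to the owner, which is expected to
		 * schedule_close and close both sockets (steps 7 and 8). */
		owner_callback(error);
	}
};

int main() {
	ToySplicePair pair;
	pair.owner_callback = [](bool error) {
		std::cout << (error ? "schedule_close (error path)\n"
				    : "clean completion\n");
	};
	/* Simulate the client socket erroring out under the incoming splice. */
	pair.splice_complete(&pair.left, true);
	return 0;
}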

There's one thing here that seems to me like an obvious candidate for
being broken, but I may have missed something, and I want to emphasize
that I suspect I didn't give adequate consideration to what we should
do when the client or server resets the connection rather than doing a
nice close.  The eight steps above are not what I think is *right*;
they are what I think the code should currently be doing, and they
should at least result in both connections being closed.  What I think
is actually right is that we handle errors *within* the pipes, in the
same way we do for EOS.  That way we can shut down our own connections
internally with a nice 3-way close, while still making sure that they
do get shut down.
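
To illustrate the idea (and only the idea; these are not the real Pipe
or Event types), an error would travel through a pipe stage as a
terminal event, just like EOS does, instead of the stage simply never
producing anything again:

#include <string>

enum class ToyEventType { Data, EOS, Error };

struct ToyEvent {
	ToyEventType type;
	std::string data;
};

struct ToyPipeStage {
	bool input_done = false;

	/* Input from the upstream stage.  EOS and Error are both terminal,
	 * but Error tells the downstream side to skip any handshake (e.g.
	 * waiting for an EOS_ACK that will never come) and reset instead. */
	ToyEvent input(const ToyEvent& e) {
		switch (e.type) {
		case ToyEventType::Data:
			return ToyEvent{ToyEventType::Data, "encoded(" + e.data + ")"};
		case ToyEventType::EOS:
			input_done = true;
			return ToyEvent{ToyEventType::EOS, ""};
		case ToyEventType::Error:
		default:
			input_done = true;
			return ToyEvent{ToyEventType::Error, ""};
		}
	}
};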

What seems most likely is that I got something wrong in EventPoll.
Are you using epoll?  It seems likely that the code I have for epoll
could be wrong, and that could also be why I haven't seen this (or at
least haven't seen this as much) on systems using kqueue.  (It could
also be that I haven't used WANProxy enough in the right situations to
have seen it.)

Specifically, it seems to me that EPOLLERR isn't given priority over
EPOLLIN and EPOLLOUT, which it probably should be.  I don't know if
epoll does what I'd hope here (and doesn't set EPOLLIN and EPOLLOUT if
EPOLLERR is set), but this could certainly throw everything else off.
But even then, when we do the actual read, *that* should cause and
propagate an error, so maybe that's not so likely.
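
For reference, this is the dispatch order I mean (and I'd probably
treat EPOLLHUP the same way).  It's a standalone illustration, not the
EventPoll code, and the handle_* functions are placeholders for
whatever EventPoll would actually do:

#include <sys/epoll.h>
#include <sys/socket.h>

void handle_error(int fd, int error);   /* placeholders, provided elsewhere */
void handle_readable(int fd);
void handle_writable(int fd);

void dispatch_once(int epfd, int timeout_ms)
{
	struct epoll_event events[64];
	int n = epoll_wait(epfd, events, 64, timeout_ms);
	for (int i = 0; i < n; i++) {
		int fd = events[i].data.fd;
		if (events[i].events & (EPOLLERR | EPOLLHUP)) {
			int error = 0;
			socklen_t len = sizeof error;
			getsockopt(fd, SOL_SOCKET, SO_ERROR, &error, &len);
			/* Deliver the error first; don't also try to read or
			 * write as if nothing had happened. */
			handle_error(fd, error);
			continue;
		}
		if (events[i].events & EPOLLIN)
			handle_readable(fd);
		if (events[i].events & EPOLLOUT)
			handle_writable(fd);
	}
}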

If you turn on verbose logging so that you're seeing all the DEBUG
messages, what do you see logged if a client resets the connection?
Does this _really_ only happen if a client resets the connection at
the same time a server is shutting down cleanly?  If so, then I guess
I'd like to know which connection it is that's hanging around in
CLOSE_WAIT — is it the connection from the client to the proxy, from
the proxy to the other proxy, or from the other proxy to the server?
(Or many of them?  Or all of them?)  I'm guessing that it's not all of
them, and wondering if maybe it's just the connection between the
proxies?  It still seems to me that any error should cause everything
to be shut down, though.

As you rightly notice, we don't handle the case of a premature hard
end-of-stream very well in the decoder.

What about only doing decoder_error in the "Decoder received EOS after
sending EOS." case in the original code?  Even that I'm not sure is
*right*, but I'm wondering if it's sufficient.  It's a more
conservative change, and if it doesn't work, that would suggest to me
that something else is broken.  If it does work, there's probably a
way to be even more conservative still.  My reasoning is this:

If we have received a TCP-level EOS at the decoder, then obviously we
can't decode any more data.  And if we try to send a friendly EOS or
EOS_ACK to that same proxy, the other side obviously can't respond,
since it has already shut down its socket for writing.  In that case
we need to error out rather than trying to do things nicely, because
waiting for a response that will never come would just hang.  Does
that make sense?
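
In other words, something shaped roughly like this.  The member and
function names here are approximate, not the actual XCodec decoder
code; decoder_error() is the routine Diego's patch calls:

struct DecoderSketch {
	bool sent_eos_ = false;
	bool received_eos_ = false;

	void decoder_error();      /* tear everything down, releasing both sockets */
	void schedule_eos_ack();   /* continue the friendly EOS/EOS_ACK handshake */

	void on_eos_from_peer() {
		received_eos_ = true;
		if (sent_eos_) {
			/* "Decoder received EOS after sending EOS."  The peer has
			 * already shut down its write side, so the rest of the
			 * handshake can never complete; waiting for it is what
			 * leaves sockets stuck in CLOSE_WAIT. */
			decoder_error();
			return;
		}
		/* Otherwise a friendly close can still complete normally. */
		schedule_eos_ack();
	}
};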

I also think we need some better way to send an error to a pipe, so
that in this case the XCodec could still try to tidy things up, within
some set of constraints, while ultimately causing a reset at the other
side of the connection.

Thanks for all your work investigating this; it sounds like you had
actually figured out most of this back in April, but I didn't read
your message carefully enough to understand it.  Thanks for continuing
to look into it and work on a fix.  You've definitely found a real
problem!

Thanks,
Juli.

On Tue, May 14, 2013 at 5:24 AM, Diego Woitasen <diego at woitasen.com.ar> wrote:
> I think I found the issue and the solution.
>
> Here is the commit in my repo:
> https://github.com/diegows/wanproxy/commit/b77d0c76eb6fc31588cc85ceec1cb6d65b4ab9d4
>
> decoder_error() should always be called if an error condition is
> detected in the decoder, to be sure that all resources are released,
> especially the sockets :)
>
> Regards,
>   Diego
>
> On Wed, Apr 24, 2013 at 8:41 PM, Diego Woitasen <diego at woitasen.com.ar> wrote:
>> On Wed, Jan 2, 2013 at 11:24 PM, Juli Mallett <juli at clockworksquid.com> wrote:
>>> If you can reproduce it with a single connection, it should be very
>>> easy to track down with logging statements.  I would guess that it's
>>> one of two things:
>>>
>>> (1) the polling mechanism isn't telling the upper layers when the
>>> remote side has closed the connection, and so the upper layers don't
>>> realize they need to close the connection
>>> (2) there could be a bug in the zlib or xcodec pipes causing them not
>>> to generate EOS and so not make the proxy close the connection.
>>>
>>> There are other possibilities, but I'd start by instrumenting as much as
>>> possible to figure out if anything is reacting to the FIN, and whether
>>> anything should be reacting to the FIN.
>>>
>>> On Wed, Jan 2, 2013 at 5:04 PM, Diego Woitasen <diego at woitasen.com.ar> wrote:
>>>> Hi,
>>>>  I was testing Wanproxy for HTTP (with Squid behind the Wanproxy
>>>> Server) and I'm running out of file descriptors frequently. On the
>>>> server side, netstat shows a lot of CLOSE_WAIT sockets. This looks
>>>> like a bug in the code, which is not closing all the sockets properly.
>>>>
>>>>
>>>>  Any hint to find where the bug is?
>>>>
>>>> Regards,
>>>>  Diego
>>>>
>>>> --
>>>> Diego Woitasen
>>>> _______________________________________________
>>>> wanproxy mailing list
>>>> wanproxy at lists.wanproxy.org
>>>> http://lists.wanproxy.org/listinfo.cgi/wanproxy-wanproxy.org
>>
>> Hi,
>>   It's been a long time. After a few months I started debugging
>> this bug again, trying to fix it. I was able to reproduce it with
>> release 0.8.0 and the latest trunk (today's checkout).
>>
>>   The only way that I found to reproduce the issue easily is under
>> this configuration:
>>
>> wget -> wp client -> wp-server -> squid
>>
>> and then executing:
>>
>> wget --max-redirect=0 www.clarin.com.ar
>>
>> Using the wp client as proxy. It fails 99% of the time.
>>
>> I couldn't reproduce it using netcat and haven't had time to write a
>> test case, but with Squid using the default config and wget it's very
>> easy to reproduce. The problem appears when Squid and wget tear down
>> the connection at almost the same time. If you try it with other
>> pages, it sometimes fails and sometimes doesn't.
>>
>> I went deep into the code and I think I found two bugs related to
>> this problem:
>>
>> 1- WP signalling isn't working properly. When the socket appears in
>> the CLOSE_WAIT state, I see the EOS msg from the server to the client
>> and the EOS_ACK from the client to the server. When the socket is
>> closed properly, I see the EOS and the EOS_ACK in both directions.
>> (What's the right behavior? I think the latter.)
>>
>> 2- When the problem appears, the client closes the socket, sending
>> the FIN,ACK and ACK just after the EOS/EOS_ACK exchange. The server
>> isn't detecting that the socket was closed, because a read is never
>> scheduled on the socket again. I understand that the signalling isn't
>> working right, but WP should handle this case.
>>
>> I'll continue debugging and trying to fix this issue over the next
>> few days. Hints are welcome.
>>
>> Regards,
>>   Diego
>>
>> --
>> Diego Woitasen
>
>
>
> --
> Diego Woitasen


