Had a really annoying issue recently where WCF SOAP requests were returning successfully (HTTP 200), yet HTTP 400 “Verb” errors were also being logged in HTTPERR, as this small extract from the log shows:
2010-03-10 09:25:18 127.0.0.1 55897 127.0.0.1 80 - - - 400 - Verb -
2010-03-10 09:25:22 127.0.0.1 55902 127.0.0.1 80 - - - 400 - Verb -
2010-03-10 09:25:29 127.0.0.1 55905 127.0.0.1 80 - - - 400 - Verb -
This issue was happening in a large SOA solution, where each WCF service (hosted in IIS) offered a simple “Heartbeat” operation for use by a hardware load balancer for health monitoring. It was clear that the monitors were causing the issue (requests from other clients didn’t exhibit this unusual behaviour); what was less clear was why.
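For reference, the heartbeat was nothing more than a no-op operation on each service contract; a minimal sketch of that shape (the interface name is illustrative, and the namespace is lifted from the SOAPAction shown later in this post, so this is not the real contract) looks something like this:

using System.ServiceModel;

// Illustrative sketch only - a simple no-op operation the load balancer's
// monitor can call to confirm the service is up.
[ServiceContract(Namespace = "http://www.company.com/product/services/service/0/1")]
public interface IServiceContract
{
    [OperationContract]
    void Heartbeat();
}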
The first step was to try to see what was going on. Using Network Monitor I captured a trace of the activity; this extract from a failing capture shows the SOAP request and response, followed by a further HTTP response carrying the 400 error.
750 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=......S., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182272, Ack=0, Win=5840 ( Negotiating scale factor 0x0 ) = 5840
751 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 TCP TCP:Flags=...A..S., SrcPort=HTTP(80), DstPort=34678, PayloadLen=0, Seq=776316902, Ack=179182273, Win=16384 ( Negotiated scale factor 0x0 ) = 16384
752 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182273, Ack=776316903, Win=5840 (scale factor 0x0) = 5840
753 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 SOAP SOAP:xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
754 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 SOAP SOAP:xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
755 23.640625 {HTTP:86, TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 HTTP HTTP:Response, HTTP/1.1, Status Code = 400, URL: /Application/Service.svc
756 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...., SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182711, Ack=776317329, Win=6432 (scale factor 0x0) = 6432
757 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.179 20.20.20.34 TCP TCP:Flags=...A...F, SrcPort=34678, DstPort=HTTP(80), PayloadLen=0, Seq=179182711, Ack=776317494, Win=7504 (scale factor 0x0) = 7504
758 23.640625 {TCP:85, IPv4:18} 15:36:42.551 20.20.20.34 20.20.20.179 TCP TCP:Flags=...A...., SrcPort=HTTP(80), DstPort=34678, PayloadLen=0, Seq=776317494, Ack=179182712, Win=65097 (scale factor 0x0) = 65097
Knowing that HTTP.sys parses the request before handing it on for processing (in this case by ASP.NET), I thought I might get some joy from the built-in ETW tracing; a quick hit to Google turned up some decent posts from the Http.sys team about capturing and analysing these traces. This didn’t really add a lot, but it did confirm that HTTP.sys was rejecting a request.
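For anyone wanting to do the same, capturing an HTTP.sys ETW trace goes roughly like this on Server 2008 and later (provider name and options from memory, so treat this as a sketch and check the Http.sys team’s posts for the exact steps):

logman start httptrace -p Microsoft-Windows-HttpService 0xFFFF -o httptrace.etl -ets
REM reproduce the failing request, then stop the session and dump it to CSV
logman stop httptrace -ets
tracerpt httptrace.etl -of CSV -o httptrace.csv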
The load balancer's monitor was a simple send and receive over TCP: it posted a send string and parsed the response to check for a valid state. In order to emulate the monitor I needed to get right back to basics, avoiding all the (well appreciated) layers of abstraction and writing directly against a Socket! A really simple bit of code: it took the send string from the load balancer:
POST /Application/Service.svc HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: \"http://www.company.com/product/services/service/0/1/ServiceContract/Heartbeat\"
Host:
Content-Length: 136
<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\"><soapenv:Header/><soapenv:Body/></soapenv:Envelope>
and just sent it directly to the socket:
using System.Net.Sockets;
using System.Text;

// requestBytes holds the send string shown above, encoded as bytes
Socket socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Connect("hosting_server", 80);
socket.Send(requestBytes);

// read whatever comes back and decode it for inspection
byte[] responseBytes = new byte[socket.ReceiveBufferSize];
int received = socket.Receive(responseBytes);
string response = Encoding.UTF8.GetString(responseBytes, 0, received);

socket.Close();
Note that the Host header is actually empty in the send string; this is allowed and documented in section 14.23 of the HTTP/1.1 RFC, although to be honest that was the first thing I tried. So after using Fiddler to capture the traffic from a .NET client that did not exhibit the issue, comparing the two and then scientifically fiddling with a few values to no avail, I actually had to do some reading. The answer was actually in the spec (who would have thought!): section 14.10 of the HTTP/1.1 RFC defines the Connection header, and the pertinent phrase from that section is:
HTTP/1.1 applications that do not support persistent connections MUST include the "close" connection option in every message.
So the fix was actually ludicrously easy: adding “Connection: close” to the headers in the load balancer send string. After so much investigation effort I honestly hoped for something a little more dramatic…
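For completeness, the header block of the working send string is just the one shown earlier with the extra line added (the SOAP envelope body stays exactly the same):

POST /Application/Service.svc HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: text/xml;charset=UTF-8
SOAPAction: \"http://www.company.com/product/services/service/0/1/ServiceContract/Heartbeat\"
Host:
Connection: close
Content-Length: 136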