Problem
When the Proxy Server receives a client request that triggers the creation of a new stream and stream key, it eventually calls ProxyServer::request_route_and_transmit(). Here is what happens next:
1. The Proxy Server sends a route query to the Neighborhood.
2. The Neighborhood uses the Routing Engine to find a route.
3. The Proxy Server receives a route response from the Neighborhood.
4. The Proxy Server transmits the client request to the Hopper for delivery.
5. The Proxy Server sends an actor message to itself containing the route response from the Neighborhood.
6. The Proxy Server receives the route response from itself and stores it in ProxyServer::stream_info.
This works fine as long as we complete Step 6 before the next request destined for that host arrives from the client.
However, if the client sends a flurry of requests for a particular host all at once, several of them may arrive before the route is established in ProxyServer::stream_info. In that case the Proxy Server will request a second, third, and fourth route to the same host from the Neighborhood. All but the last will appear briefly in ProxyServer::stream_info before being overwritten, because they all share the same stream key: the stream key is derived from the host and port (plus a bit of constant salt).
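A minimal sketch of why concurrent requests collide on one key, assuming a simplified stand-in for the real derivation (the actual stream-key code in the project differs; this only shows that the key is a pure function of host, port, and a constant salt):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical constant salt; the real value and hashing scheme are different.
const SALT: u64 = 0x5EED;

// The key depends only on host, port, and salt, so every request bound for the
// same host:port produces the same stream key.
fn make_stream_key(host: &str, port: u16) -> u64 {
    let mut hasher = DefaultHasher::new();
    SALT.hash(&mut hasher);
    host.hash(&mut hasher);
    port.hash(&mut hasher);
    hasher.finish()
}
```

Because the derivation has no per-request component, a flurry of requests to one host all map to a single ProxyServer::stream_info entry.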
The problem is that once a route is finally established and the responses begin to arrive, all of them will be assumed to have come back over the winning route, and that route's participants will be paid for their services: services they may not have provided for the first few requests. Meanwhile, the providers who actually carried the first few responses are forgotten in the overwriting.
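The race above can be sketched with a toy model (all names here are hypothetical, not the project's actual types): until Step 6 stores a route under the stream key, every incoming request for that key fires another route query.

```rust
use std::collections::HashMap;

// Toy model of the Proxy Server's per-stream bookkeeping.
#[derive(Default)]
struct ProxyServerModel {
    stream_info: HashMap<u64, &'static str>, // stream key -> winning route
    route_queries_sent: u32,
}

impl ProxyServerModel {
    // A request arrives; if no route is stored yet for this stream key,
    // a fresh route query goes out to the Neighborhood.
    fn handle_request(&mut self, stream_key: u64) {
        if !self.stream_info.contains_key(&stream_key) {
            self.route_queries_sent += 1; // duplicate queries for the same key
        }
    }

    // Step 6: a route response arrives and is stored; a later response for
    // the same key would silently overwrite it.
    fn store_route(&mut self, stream_key: u64, route: &'static str) {
        self.stream_info.insert(stream_key, route);
    }
}
```

Four requests arriving before the first store_route() produce four route queries, even though only the last stored route will ever be paid.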
Proposed Solution
When we send a route query to the Neighborhood, we should also mark that stream key (probably in ProxyServer::stream_info) as having a query in progress. If further requests arrive while the query is in progress, they should be delayed (perhaps the Proxy Server should re-send them to itself) until the route response arrives from the Neighborhood. When it does, the route should be stored in ProxyServer::stream_info immediately, rather than having the Proxy Server send it to itself first.
This scheme presents a problem if the Neighborhood cannot find a route for the stream: all further requests for that host will be caught forever whirling from the Proxy Server to itself until the Node is shut down. There should be some way to modify the mark if no route is found (or perhaps if no response is ever received from the Neighborhood), so that when the whirling requests see that modification, they know to die after having the appropriate ServerImpersonator construct an error response for the client.
However, a solution like this might create a problem of its own. What if the Neighborhood initially can't find a route, but by the time a few requests are whirling around the Proxy Server, the necessary connections have been made and a route is now available? The whirling requests will eventually generate ServerImpersonator error responses and die, but every subsequent request for that host will encounter the mark in ProxyServer::stream_info and die without prompting the Neighborhood to try again: that host will be effectively blacklisted even though it's now reachable.
Maybe we'd want to replace StreamInfo::route_opt with an enum (the enum name here is only illustrative):

```rust
enum RouteStatus {
    // Before this time, we send the request around again; after this time, we fail the
    // request with ServerImpersonator and flip the value to QueryFailed.
    QueryInProgress(SystemTime),
    // Before this time, we fail all requests for this stream key with ServerImpersonator.
    // After this time, we make another request for a route to the Neighborhood and flip
    // the value to QueryInProgress.
    QueryFailed(SystemTime),
    // A valid response has arrived from the Neighborhood; we'll use it for the life
    // of the stream.
    RouteInUse(RouteQueryResponse),
    // This is a replacement for StreamInfo::time_to_live_opt. When the client shuts down
    // a stream, but we have to wait for (and pay for) straggling CORES packages from the
    // server, we can set this value to a time a few seconds in the future. The issue here
    // is how to retire this stream key when its time runs out so that ProxyServer::stream_info
    // doesn't fill up with these.
    RouteDead(RouteQueryResponse, SystemTime),
}
```
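To make the state transitions concrete, here is a hedged sketch of how the Proxy Server might dispatch on such an enum when a request arrives. Everything here (the enum name, Action, decide(), the stub RouteQueryResponse) is hypothetical scaffolding so the sketch compiles on its own; it is not the project's API, and what to do with a request on a dead stream is left as one possible choice.

```rust
use std::time::SystemTime;

// Stub standing in for the Neighborhood's real route response type.
struct RouteQueryResponse;

enum RouteStatus {
    QueryInProgress(SystemTime),
    QueryFailed(SystemTime),
    RouteInUse(RouteQueryResponse),
    RouteDead(RouteQueryResponse, SystemTime),
}

// What the Proxy Server would do with an incoming request, per the scheme above.
#[derive(Debug, PartialEq)]
enum Action {
    Resend,               // re-send the request to ourselves and keep waiting
    FailWithImpersonator, // have the ServerImpersonator build an error response
    Transmit,             // route is known; hand the request to the Hopper
    Requery,              // failure deadline passed; ask the Neighborhood again
}

fn decide(status: &RouteStatus, now: SystemTime) -> Action {
    match status {
        // Query still pending: keep the request whirling until the deadline.
        RouteStatus::QueryInProgress(deadline) if now < *deadline => Action::Resend,
        // Pending too long: fail (and the caller would flip the value to QueryFailed).
        RouteStatus::QueryInProgress(_) => Action::FailWithImpersonator,
        // Recent failure: fail fast without bothering the Neighborhood.
        RouteStatus::QueryFailed(deadline) if now < *deadline => Action::FailWithImpersonator,
        // Failure has aged out: try the Neighborhood again (flip to QueryInProgress).
        RouteStatus::QueryFailed(_) => Action::Requery,
        RouteStatus::RouteInUse(_) => Action::Transmit,
        // One possible choice: new client requests on a wound-down stream get an error.
        RouteStatus::RouteDead(_, _) => Action::FailWithImpersonator,
    }
}
```

Note how the timestamps answer the blacklisting concern above: QueryFailed fails requests only until its deadline, after which the next request triggers a fresh Neighborhood query instead of dying.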