[SDL] [Framework] Event-based Networking
grimfang4 at gmail.com
Wed Mar 20 12:51:49 PDT 2013
This could be useful, actually:
On Wed, Mar 20, 2013 at 3:21 PM, Nathaniel J Fries <nfries88 at yahoo.com> wrote:
> Glocke wrote:
> Nathaniel J Fries wrote:
> The thread-per-client model is a bad model for any highly-interactive
> program. Like I said, synchronization bottlenecks. The key is to do as
> little synchronizing as possible; which means using a specific thread for a
> specific task and minimizing thread interaction.
> My current approach is:
> - n clients are handled by 2*n worker threads (one for sending, one for
> receiving) on the server
> - each worker thread accesses either the worker's outgoing queue (for
> popping events to send) or the server's incoming queue (to enqueue new
> events onto a main queue)
> - the main loop will pop events from the server's incoming queue (that
> is, events that arrived from different workers) and handle them
> - the main loop will also push events (so-to-say "answers") to the worker's
> outgoing queue (for sending them).
> The number of threads is definitely larger than the number of processors.
> Do you have an idea where the bottlenecks might be? I'm not sure whether
> this is already a bottleneck or just close to one... what's your opinion?
> Kind regards
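The two-queue hand-off Glocke describes can be sketched with a mutex-protected queue. The names `Event` and `EventQueue` are illustrative, not from the thread; note that every push and pop below takes the same lock, which is exactly the synchronization cost the reply that follows is concerned with:

```cpp
// Sketch of one shared incoming queue: receiving workers push, the main
// loop pops. Event and EventQueue are illustrative names.
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

struct Event {
    int client_id;
    std::string payload;
};

class EventQueue {
public:
    // Called by each receiving worker thread: one lock acquisition per push.
    void push(Event e) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(e));
        }
        cv_.notify_one();
    }

    // Called by the main loop: blocks until at least one event is queued.
    Event pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Event e = std::move(q_.front());
        q_.pop();
        return e;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Event> q_;
};
```

With n receiving workers all pushing into this one structure, the mutex is the serialization point: the more workers, the more time they spend queued up on the same lock.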
> Again, it depends on what the server actually does.
> Most HTTP servers are designed with a thread-per-client (or, more
> accurately, a process-per-client; but that's essentially the same thing)
> and easily handle thousands of simultaneous connections. Because an HTTP
> server simply accepts a connection, reads a request, grabs a file, and
> sends the content (possibly after processing -- see PHP, JSP, ASP, etc);
> the only interaction between threads/processes occurs when they read the
> same file, which doesn't actually require synchronization (reading data
> only requires synchronization when a write is happening to the same data,
> which is quite rare for an HTTP server). There is no synchronization
> bottleneck with an HTTP server.
> (Fun fact: many developers of HTTP and similar servers have picked on me
> for endorsing the event-driven alternatives, claiming they're unnecessarily
> complex with no benefit -- which is probably true for HTTP servers.)
> You should never need more than one thread per client.
> 1) Your use of a separate reader and writer thread for each client is a
> waste of resources. Each thread necessarily has its own copy of all CPU
> registers, stored in memory when the thread isn't running, and also its own
> stack, which is typically at least 1 KB in size; most modern operating
> systems also provide a feature for thread-local copies of data, which, if
> used (note: a modern C or C++ runtime will use this feature internally;
> glibc and Visual C++'s multithreaded runtime both do, at least), means
> additional memory is needed by ALL threads regardless of whether or not the
> thread-local will ever be used on that thread. The result is several wasted
> MB of memory (if you have, say, 100 threads)!
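If the thread count must stay high, the per-thread memory cost can at least be bounded by shrinking each worker's stack. A POSIX-only sketch; the 64 KiB figure is an illustrative choice, not a recommendation from the post:

```cpp
// Sketch: capping per-thread stack size with POSIX threads to bound the
// per-thread memory cost described above. 64 KiB is an illustrative value;
// the default is often 1-8 MiB depending on the platform.
#include <limits.h>
#include <pthread.h>
#include <cstddef>

static void *worker(void *) {
    // Real code would run the client's send or receive loop here.
    return nullptr;
}

// Spawns one worker with a reduced stack and joins it; returns true on success.
bool spawn_small_stack_worker() {
    pthread_attr_t attr;
    if (pthread_attr_init(&attr) != 0)
        return false;

    std::size_t stack = 64 * 1024;
    if (stack < static_cast<std::size_t>(PTHREAD_STACK_MIN))
        stack = PTHREAD_STACK_MIN;  // never go below the system's floor
    pthread_attr_setstacksize(&attr, stack);

    pthread_t t;
    bool ok = (pthread_create(&t, &attr, worker, nullptr) == 0);
    if (ok)
        pthread_join(t, nullptr);
    pthread_attr_destroy(&attr);
    return ok;
}
```

This trims only the stack portion of the cost; register context and any thread-local storage are still paid per thread.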
> 2) read and write (recv and send) always interfere with each other. Either
> they are inherently mutually exclusive (meaning the read thread will block
> the write thread and the other way around), or you run the risk of having a
> race condition, which means undefined (and, because of the nature of
> concurrent execution, unpredictable) behavior. I'm honestly not sure if any
> systems do synchronize sockets internally.
> 3) If you're already using non-blocking I/O and polling to see which is
> ready for reading, you might as well use a thread per processor (since SDL,
> and most other portable libraries, don't offer a way to detect the number
> of processors, you can typically assume 4, 8, or 16, depending on whether
> you'd rather be conservative, scalable, or balanced between the two),
> because you're already basically using the same mechanism in each
> individual thread anyway.
> So, not only is it a waste of resources, but it is also a pointless (and
> possibly destructive) endeavor.
> If you do want to stick with the thread-per-client model (I hardly blame
> you -- it is the greatest extent covered by most network programming
> knowledge bases and still used just about everywhere that isn't an MMORPG),
> and you have this queue for a worker thread, then you should implement
> the queue as a wait-free queue. Wait-free queues are highly scalable and
> would reduce the synchronization bottleneck concern. A general-purpose
> wait-free queue is described by Kogan and Petrank (see:
> http://www.cs.technion.ac.il/~sakogan/papers/ppopp11.pdf ), but a
> specialized wait-free queue for just this specific case (using boost's
> atomic library, designed after the C++11 atomics API) is described in the
> boost documentation (link lost in the archive).
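The boost example alluded to is a single-producer/single-consumer ring buffer. Here is a minimal C++11 rendition of that idea (a sketch, not the boost code): one receiving worker pushes while the main loop pops, with no mutex, so each worker gets its own queue and the shared lock disappears.

```cpp
// Minimal single-producer/single-consumer lock-free ring buffer using C++11
// atomics, in the spirit of the Boost.Atomic example cited above. Safe only
// when exactly one thread pushes and exactly one thread pops.
#include <atomic>
#include <cstddef>

template <typename T, std::size_t Size>
class SpscQueue {
public:
    // Producer side: returns false when the ring is full (holds Size-1 items).
    bool push(const T &value) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t next = (head + 1) % Size;
        if (next == tail_.load(std::memory_order_acquire))
            return false;  // full
        buffer_[head] = value;
        head_.store(next, std::memory_order_release);  // publish the slot
        return true;
    }

    // Consumer side: returns false when the ring is empty.
    bool pop(T &out) {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;  // empty
        out = buffer_[tail];
        tail_.store((tail + 1) % Size, std::memory_order_release);  // free the slot
        return true;
    }

private:
    T buffer_[Size];
    std::atomic<std::size_t> head_{0};
    std::atomic<std::size_t> tail_{0};
};
```

Both operations complete in a bounded number of steps regardless of what the other thread is doing, which is what makes the per-worker-queue design scale where one big locked queue does not.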
> Nate Fries
> SDL mailing list
> SDL at lists.libsdl.org