## Userspace Context Switching

The `ircd::ctx` subsystem is a userspace threading library meant to regress the asynchronous callback pattern back to synchronous suspensions. This is essentially a full elaboration of a `setjmp()` / `longjmp()` between independent stacks, but justified with modern techniques and comprehensive integration throughout IRCd.

### Foundation

This library is based on `boost::coroutine` / `boost::context`, which wrap the register save/restores in a cross-platform way, in addition to providing properly `mmap()`'ed (`NOEXEC`, etc.) memory appropriate for stacks on each platform.

`boost::asio` has then added its own comprehensive integration with the above libraries, eliminating the need for us to worry about a lot of boilerplate to de-async the asio networking calls. See: `boost::asio::spawn`.

This is a nice boost, but that's as far as it goes. The rest is on us here to actually make a threading library.

### Interface

We mimic the standard library's `std::thread` suite as much as possible (which itself mimics the `boost::thread` library), offering alternative threading primitives for these userspace contexts in place of the operating-system thread primitives in `std::`: `ctx::mutex`, `ctx::condition_variable`, and `ctx::future`, among others.

* The primary user object is `ircd::context` (or `ircd::ctx::context`), which has a `std::thread` interface.

### Context Switching

A context switch has the overhead of a heavy function call -- a function with a bunch of arguments (i.e. the registers being saved and restored). We consider this fast, and our philosophy is not to treat the context switch itself as a bad thing to be avoided for its own sake.

This system is also fully integrated with both the IRCd core `boost::asio::io_service` event loop and the networking systems. There are actually several types of context switches going on here, built on two primitives:

* Direct jump: This is the fastest switch. Context A can yield to context B directly if A knows about B, knows that B is in a state ready to resume from a direct jump, and knows that A itself will be further resumed somehow. This is not always suitable in practice, so other techniques may be used instead.

* Queued wakeup: This is the common default and the safe switch; it is where the context system integrates with the `boost::asio::io_service` event loop. The execution of a "slice" -- as we'll call a yield-to-yield run of non-stop computation -- is analogous to a function posted to the `io_service` in the asynchronous pattern. Context A can enqueue context B if it knows about B, and then choose whether or not to yield. In either case the `io_service` queue simply continues to the next task, which isn't guaranteed to be B.