# Userspace Context Switching

The `ircd::ctx` subsystem is a userspace threading library meant to regress the asynchronous callback pattern back to synchronous suspensions. This is essentially a full elaboration of a `setjmp()` / `longjmp()` between independent stacks, justified with modern techniques and comprehensive integration throughout IRCd.

## Foundation

This library is based on `boost::coroutine` / `boost::context`, which wrap the register save/restore in a cross-platform way, in addition to providing properly `mmap()`'ed memory (`NOEXEC`, etc.) appropriate for stacks on each platform.
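
For a feel of the primitive underneath, here is a minimal sketch using plain Boost.Context (not IRCd code) of two independent stacks jumping between each other; every `resume()` saves one side's registers and restores the other's:

```cpp
#include <iostream>
#include <boost/context/continuation.hpp>

namespace bc = boost::context;

int main()
{
    // callcc() allocates a fresh stack and jumps into the lambda.
    bc::continuation c{bc::callcc([](bc::continuation &&main)
    {
        std::cout << "on the new stack" << std::endl;
        main = main.resume();                 // jump back to main()'s stack
        std::cout << "resumed once more" << std::endl;
        return std::move(main);               // finish; control returns to main()
    })};

    std::cout << "back on the main stack" << std::endl;
    c = c.resume();                           // jump into the lambda again
}
```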

`boost::asio` has then added its own comprehensive integration with the above libraries, eliminating the need for us to worry about a lot of boilerplate to de-async the asio networking calls. See: `boost::asio::spawn`.
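
For illustration, a minimal sketch of plain `boost::asio` usage (again not IRCd code; the host name and buffer size are arbitrary) showing how `spawn()` turns asynchronous calls into synchronous-looking suspensions:

```cpp
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <boost/asio/spawn.hpp>

int main()
{
    boost::asio::io_service ios;
    boost::asio::spawn(ios, [&ios](boost::asio::yield_context yield)
    {
        namespace ip = boost::asio::ip;

        // Passing `yield` as the completion token suspends this stack until
        // the operation completes; we never write a callback ourselves.
        ip::tcp::resolver resolver{ios};
        auto endpoint(resolver.async_resolve({"example.com", "80"}, yield));

        ip::tcp::socket socket{ios};
        boost::asio::async_connect(socket, endpoint, yield);

        const std::string request
        {
            "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
        };
        boost::asio::async_write(socket, boost::asio::buffer(request), yield);

        char buf[1024];
        const auto bytes(socket.async_read_some(boost::asio::buffer(buf), yield));
        std::cout << std::string{buf, bytes} << std::endl;
    });

    ios.run();
}
```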

This is a nice boost, but that's as far as it goes. The rest is on us here to actually make a threading library.

## Interface

We mimic the standard library's `std::thread` suite as much as possible (which itself mimics `boost::thread`), and we offer alternative threading primitives for these userspace contexts rather than those in `std::` for operating system threads: `ctx::mutex`, `ctx::condition_variable`, and `ctx::future`, among others.

* The primary user object is `ircd::context` (or `ircd::ctx::context`), which has a `std::thread` interface; a sketch of basic usage follows below.
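
A rough sketch of how this interface reads in practice. The exact constructor overloads and member signatures here are assumptions extrapolated from the `std::thread`-style interface described above, not documented API:

```cpp
#include <mutex>
#include <ircd/ircd.h>

using namespace ircd;

ctx::mutex mtx;     // yields the waiting context rather than blocking the OS thread

void example()
{
    // std::thread-like user object: the lambda runs on its own stack,
    // inside the same OS thread as the event loop.
    context worker{[]
    {
        const std::lock_guard<ctx::mutex> lock{mtx};
        // ... any wait in here suspends only this context
    }};

    // ctx::async mirrors std::async, yielding a ctx::future (see async.h / future.h).
    auto fut(ctx::async([]{ return 42; }));
    const int result(fut.get());   // suspends this context until the value is ready

    worker.join();                 // suspends until worker's function returns
}
```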

## Context Switching

A context switch has the overhead of a heavy function call: a function with many arguments (i.e. the registers being saved and restored). We consider this fast, and our philosophy is to not think of the context switch itself as a bad thing to be avoided for its own sake.

This system is also fully integrated with both the IRCd core `boost::asio::io_service` event loop and the networking systems. There are actually several types of context switches going on here, built on two primitives:

* Direct jump: This is the fastest switch. Context A can yield to context B directly if A knows about B, if it knows that B is in a state ready to resume from a direct jump, and if it knows that A itself will also be resumed somehow. This is not always suitable in practice, so other techniques may be used instead.

* Queued wakeup: This is the common, safe default. This is where the context system integrates with the `boost::asio::io_service` event loop. The execution of a "slice", as we'll call a yield-to-yield run of non-stop computation, is analogous to a function posted to the `io_service` in the asynchronous pattern. Context A can enqueue context B if it knows about B, and can then choose whether or not to yield. In either case the `io_service` queue simply continues to the next task, which is not guaranteed to be B. A sketch of this pattern follows below.
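
To make the queued wakeup concrete, here is a hedged sketch using `ctx::dock` (the condition-variable-like primitive from `dock.h`); the `wait()` / `notify_one()` member names are assumptions following the `std::condition_variable` convention this library mimics:

```cpp
#include <ircd/ircd.h>

using namespace ircd;

ctx::dock dock;       // rendezvous point for waiting/notifying contexts
bool ready{false};

void context_b()
{
    // Yields: B's slice ends here and the io_service moves on to other work.
    dock.wait([]{ return ready; });

    // ...B resumes later, as a task dispatched from the io_service queue.
}

void context_a()
{
    ready = true;        // no lock needed: both contexts share one OS thread
    dock.notify_one();   // queued wakeup: enqueues B rather than jumping to it

    // A keeps running its slice; when it eventually yields, the io_service
    // continues with the next queued task, which is not guaranteed to be B.
}
```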