Rolling your own CPU faker.[1]
Let us imagine a 3-component FBP (flow-based programming) system. Component A has one output pin and that pin is wired to the single input pin of Component B. Both components are nested inside Component P, which has no pins.
P is the “top level” parent component of this simple system.
When the system is started, the “kernel” creates (“instantiates” in OO parlance) Component P. P in turn creates (or asks the kernel to create) one A component and one B component and creates a wiring list (aka flow) that specifies the connection: “A’s output is connected to B’s input”.
Notice that, semantically, the wiring list belongs to the parent P. A can’t “see” B and B cannot see A.
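The setup so far might be sketched as plain data. All names here are illustrative (this is not the API of any particular FBP implementation); the point is that the wiring list lives in P, and neither A nor B holds a reference to the other:

```python
from collections import deque

# Leaf components: each pin is just a queue. A and B know only their
# own pins; they hold no reference to each other.
a = {"name": "A", "out": deque()}
b = {"name": "B", "in": deque()}

# The wiring list is data owned by the parent P. It is the only place
# where the connection "A's output -> B's input" is recorded.
p = {
    "children": [a, b],
    "wiring": [(a, "out", b, "in")],
}
```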
After initialization[2] the kernel calls the top level component and tells it to run.
Component P calls every one[3] of its components and tells them to run.
Let’s say, for the sake of argument, that Component B is called first. It checks its input queue, finds nothing in it, and simply returns (to its parent P).
Then Component A is called. It doesn’t have an input queue, but it does have code that generates one output. That output is bundled up into a simple data structure (an IP, aka Information Packet, aka event) and enqueued on the appropriate output pin of A. A then returns control flow to its parent P.
P looks around for more work. It finds that there is an IP sitting on the output queue of A. P pulls the IP off of the queue, checks its wiring list, decides where the IP is destined and queues it up on B’s input queue.
P again looks around for more work. It sees that B has something queued up on its input queue. P calls B. B pulls the IP off of its queue and processes it. Then it returns to P.
P finds no more work pending on any of its children, so P returns to the kernel.
The End.
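The whole walkthrough above fits in a few dozen lines. Here is a minimal sketch, with illustrative names and a hard-coded payload standing in for A’s generated output (neither is taken from any real FBP library):

```python
from collections import deque

class Component:
    """A leaf component with one input queue and one output queue (pins)."""
    def __init__(self, name):
        self.name = name
        self.inq = deque()
        self.outq = deque()

class A(Component):
    """A has no input; on each run it generates one IP on its output pin."""
    def run(self):
        self.outq.append("one output")  # bundle up an IP and enqueue it

class B(Component):
    """B drains its input queue; if the queue is empty it simply returns."""
    def __init__(self, name):
        super().__init__(name)
        self.received = []
    def run(self):
        while self.inq:
            self.received.append(self.inq.popleft())  # process the IP

class P:
    """Parent: instantiates A and B, and owns the wiring list."""
    def __init__(self):
        self.a, self.b = A("A"), B("B")
        self.wiring = [(self.a, self.b)]  # "A's output -> B's input"

    def run(self):
        # Tell every child to run once (B first, as in the walkthrough).
        self.b.run()   # B finds nothing and returns
        self.a.run()   # A enqueues one IP on its output pin
        # Look around for more work until nothing is pending.
        while any(src.outq or dst.inq for src, dst in self.wiring):
            for src, dst in self.wiring:
                while src.outq:                       # route per the wiring list
                    dst.inq.append(src.outq.popleft())
                if dst.inq:
                    dst.run()                         # call the child with work
        # No more work pending on any child: return to the kernel.

# The "kernel": instantiate the top-level component and tell it to run.
p = P()
p.run()
```

Everything happens in one thread of control: `run()` is an ordinary subroutine call, and the “scheduling” is just P’s loop moving IPs along its wiring list.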
That was easy, and no processes were spawned in the process[sic].
Too easy? No, the above “kernel” does the same things that a multi-processing kernel does – it treats apps as subroutines and calls them.
Finesse points (async I/O, locking, prevention of recursion, etc.) will be discussed in subsequent articles.
[1] What I describe here is slightly different from FBP Classic, again for the sake of simplicity. I hope to return to FBP Classic in a future article.
[2] Separating start-up from steady-state running is important in a system more complicated than the one described here.
[3] Clearly, this can be optimized.