An animation of an 8-frame recognition trace in a 6-level, hexagonal-topology Sparsey model. The model has 64 V1 macs, each viewing an input aperture of 4x4 pixels. There are 25 V2 macs, 9 V3 macs, 9 V4 macs, and 1 (V5) mac at the top. The individual cells cannot be seen at V1 or V2. This network has a total of 14,558 cells and 6,806,262 U, H, and D weights. We show three versions of the movie. The first shows the SDCs turning on/off (though again, the individual cells comprising the codes are really only visible at levels L3 and higher). The second suppresses the CMs and cells to emphasize the patterns of mac activation. Macs are activated (red border and rose shading) if they have a sufficient amount of bottom-up (U) afferent activity. Black cells are correctly activated, red cells are incorrectly activated, and light green cells are incorrectly non-activated. The third shows a small fraction of the U (blue), H (green), and D (magenta) weights activated as part of this memory trace (tens of thousands of synaptic transmissions underlie this trace).
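For concreteness, here is a minimal sketch of that topology in Python (not Sparsey source code). The mac counts and the 4x4 V1 aperture come from the description above; the assumption that the V1 apertures tile the input frame without overlap on a square grid is ours, a simplification of the actual hexagonal topology.

```python
import math

# Sketch (not Sparsey source) of the level-by-level topology described above.
# Mac counts and the 4x4 V1 aperture are from the text; the square,
# non-overlapping tiling of apertures is an illustrative assumption.
MACS_PER_LEVEL = {"V1": 64, "V2": 25, "V3": 9, "V4": 9, "V5": 1}
V1_APERTURE = (4, 4)  # pixels viewed by each V1 mac

# 64 V1 macs form an 8x8 grid; with non-overlapping 4x4 apertures they
# cover exactly the 32x32-pixel input frames used in this simulation.
grid = math.isqrt(MACS_PER_LEVEL["V1"])                  # 8
frame = (grid * V1_APERTURE[0], grid * V1_APERTURE[1])   # (32, 32)
print(f"{sum(MACS_PER_LEVEL.values())} macs; V1 covers {frame[0]}x{frame[1]} px")
```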

In this simulation, the cell (and thus SDC) activation duration, or persistence, was 1 frame for V1 codes, 2 frames for V2 codes, and 4 frames for V3-V5 codes (this is somewhat of a departure from our typical practice of increasing persistence at every level, but we are experimenting with many model parameters). In another departure, we imposed delayed activation for levels L4 and L5: L4 could not activate any sooner than the second frame of a sequence, and L5 no sooner than the third. We are experimenting with this constraint because it will, in general, force the first L4 and L5 codes activated during a sequence to depend on some temporal context, which should yield more unique higher-level codes, less cross-talk between codes, and thus higher storage capacity and recognition accuracy. The 8-frame, 32x32-pixel input snippet was extracted from a pre-processed (edge-filtered, binarized, skeletonized) snippet from the viHASi database of synthetically generated humanoid actors performing typical actions, e.g., kicking and jumping. See the detailed recognition trace showing all SDCs (and their comprising cells) active across all macs on all frames, as well as other statistics, for further information. Note that a correctly activated cell appears black on the first frame of its activation and then gray on the remaining frames of its persistence.
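The two timing rules are simple enough to state in code. The following is an illustrative Python sketch with our own names (PERSISTENCE, EARLIEST_FRAME, etc.), not Sparsey's implementation; it encodes the schedule used in this simulation.

```python
# Frames a newly activated code stays on, per level (this simulation's values).
PERSISTENCE = {"V1": 1, "V2": 2, "V3": 4, "V4": 4, "V5": 4}

# Delayed activation: earliest frame (1-indexed) at which a level's macs
# may activate. L4 waits until frame 2, L5 until frame 3.
EARLIEST_FRAME = {"V1": 1, "V2": 1, "V3": 1, "V4": 2, "V5": 3}

def may_activate(level: str, frame: int) -> bool:
    """A mac at `level` may activate a new code on `frame` only once the
    level's onset delay has elapsed."""
    return frame >= EARLIEST_FRAME[level]

def code_active(level: str, onset_frame: int, frame: int) -> bool:
    """A code activated on `onset_frame` persists for PERSISTENCE[level] frames."""
    return onset_frame <= frame < onset_frame + PERSISTENCE[level]

# e.g., L5 cannot activate on frame 2 of the 8-frame snippet, but can on frame 3:
assert not may_activate("V5", 2) and may_activate("V5", 3)
```

Because any first L4 or L5 activation is pushed past the first frame, the H and D signals arriving at those macs already carry history, which is exactly the temporal-context dependence described above.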

...and here is the trace showing only which macs become active, and when, during this memory trace.

This shows a small fraction of the U, H, and D signals between and within levels, and the progressive updating of codes from V1 to V5 on each frame processed. It goes too quickly to see or understand the causal basis for the rapidly changing neural codes, i.e., the sparse distributed codes (SDCs). What's really happening here is that the Code Selection Algorithm (CSA) [see Rinkus (2014) for a description] runs in every mac (having sufficient bottom-up (U) input) at every level and on every frame. The CSA combines the U, H, and D signals arriving at the mac, computes the overall familiarity of the spatiotemporal moment represented by that total input, and in so doing effectively retrieves the spatiotemporally closest-matching stored moment in the mac. The hugely important computational advantage of Sparsey is that because all moments stored in a mac are stored as SDCs in superposition, the time it takes to retrieve the closest-matching code, or to store a new moment in adherence to the "similar inputs map to similar codes" (SISC) property, remains constant for the life of the system. Models that use localist representations of stored entities (e.g., objects, inputs, concepts, moments, events) CANNOT achieve this speed of inference/retrieval.
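To make the constant-time claim concrete, here is a toy caricature in Python. It is emphatically not the CSA (see Rinkus (2014) for that algorithm, which also normalizes the input summations into a familiarity measure and uses it to modulate winner selection); all names and numbers are illustrative. The point it shows: because every stored moment lives in superposition in the same weight matrix, scoring a new input against ALL stored moments is a single matrix-vector product, whose cost depends only on the mac's dimensions, never on how many moments have been stored.

```python
import numpy as np

rng = np.random.default_rng(0)
N_INPUTS, N_CELLS, CODE_SIZE = 256, 100, 5   # illustrative mac dimensions

# One matrix holds all stored moments in superposition. Here only U weights
# are modeled; H and D weights would contribute additional summations.
W = np.zeros((N_CELLS, N_INPUTS))

def store(x, code):
    """Binary Hebbian-style storage: potentiate weights from x's active
    inputs to the cells comprising this moment's SDC."""
    W[np.ix_(code, np.flatnonzero(x))] = 1.0

def retrieve(x):
    """One matrix-vector product scores every cell against all stored
    moments at once; the CODE_SIZE most familiar cells form the code."""
    u = W @ x                                # cost independent of #stored moments
    return np.sort(np.argsort(u)[-CODE_SIZE:])

# Store a few random moments, then cue again with the first input.
moments = [((rng.random(N_INPUTS) < 0.1).astype(float),
            rng.choice(N_CELLS, CODE_SIZE, replace=False)) for _ in range(3)]
for x, code in moments:
    store(x, code)
print("retrieved:", retrieve(moments[0][0]), " stored:", np.sort(moments[0][1]))
```

A localist model would instead have to compare the cue against each stored item one at a time, so its retrieval cost grows with the number of stored items; the superposition does away with that dependence entirely.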