Strictly Long

On compression, the zero path is followed except when the one path is strictly long. This ensures that on decode, when a check finds the one path to be strictly long, it can be taken instead of the otherwise compulsory zero path. Equal path lengths generate zero paths. A long zero is a zero. A long one which is not strictly long means the zero path is taken. A strictly long one is always taken as one.

The last solve path taken can be modulated between the zero and one states via a coding system such as Jaxon Modulation, since the paths are by design taken with different probabilities, the two face states at the end of a solve path are inverses of each other, and both connect to a third face state common to both solve paths.

That was the vagary before the light, which follows in more clarity.

function decompress() {
    // Rev*/For* travel the state in the reverse and forward directions;
    // emit() outputs a reconstructed bit (emission only happens on decompression).
    if(Rev0StrictShort) {
        RevTravel0();
        if(For1StrictLong) { emit(0); ForTravel0(); return; }
        ForTravel0();
        if(Rev1StrictLong) {
            RevTravel1();
            if(!For0StrictShort) { emit(1); return; }
            else { ForTravel1(); RevTravel0(); return; }
        }
    } else { RevTravel0(); return; }
}

and

function compress() {
    if(For1StrictLong) {
        if(For0StrictShort) {
            // Both strict conditions hold: the next source bit picks the path (bit absorption).
            if(absorb()) { ForTravel1(); return; }
            else { ForTravel0(); return; }
        } else {
            ForTravel1(); return;   // a strictly long one is always taken as one
        }
    } else { ForTravel0(); return; }   // otherwise the compulsory zero path
}

"This then has the advantage of turning unique decode conditions into bit absorption by making an infrequent state even less frequent, so boosting the information stored in frequency of occurrence."

Things that make you go umm. This is a massive benefit over needing Jaxon Modulation, as, infrequently, one bit is absorbed during compression, while emission of bits only occurs during decompression.
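
As a sketch of the absorb() and emit() calls used above, assuming a simple LIFO bit store (which fits the later note that compression runs from the last bit of the file to the first); the bits array and its use for both directions are illustrative only:

// Illustrative LIFO bit store. Compression pops source bits with absorb();
// decompression pushes reconstructed bits with emit().
const bits = [];

function absorb() {        // used by compress(): take the next input bit
    return bits.pop() === 1;
}

function emit(bit) {       // used by decompress(): output a reconstructed bit
    bits.push(bit);
}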

The solve direction used to count the path length (move count) does not matter as long as the same one (for(ward) or rev(erse)) is used throughout, so the solve algorithm stays deterministic. Both compression and decompression use the same solve algorithm direction. The forward and reverse refer to the pseudo-random generator and travel. The length of travel is calculated in one direction only, even though travel is in both directions. This is similar to the way the two strands of DNA use two different methods of polymerase replication: replication on one strand is continuous and always in one direction, while the other strand is made in larger parts that are then joined into one.
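
As a minimal sketch of that convention, assuming a hypothetical solveMoves(state) that runs the deterministic solver in the single agreed direction and returns its move list:

// Path length (move count) is always measured in the one fixed solve
// direction, even when the travel that uses it goes forward or reverse.
// solveMoves() is a hypothetical deterministic solver for that direction.
function pathLength(state) {
    return solveMoves(state).length;
}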

"The idea of strict long is the end state of a compression solve has two possible towards nodes from forward from two differing states. Both of these must be the long ones of from the respective previous states. The opposite is true for a strict short."

That in turn means more solves per bit, but it does mean the decode becomes unique, as it becomes impossible to reverse into a wrong decode. I must emphasize again that all solving takes place in one direction, and this is not the same as the dual directions in which the states between solves move.

Or, more precisely, the initial and final states of any solve are strictly always up or down countable within the one process of either compression or decompression, but opposite for each process, while the solve always progresses in the one direction. This is because an anti-solve may take a differing number of steps.
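
Reading the quoted definition of strict long above, a predicate could be sketched as follows; predecessorStates(), isLongPath() and the state representation are assumptions rather than anything specified on this page:

// Sketch of the strict long / strict short tests: the end state of a solve
// has two possible previous states, and both arrivals must be the long
// (or, for strict short, the short) path from their respective previous states.
function isStrictLong(endState) {
    const [prevA, prevB] = predecessorStates(endState);
    return isLongPath(prevA, endState) && isLongPath(prevB, endState);
}

function isStrictShort(endState) {
    const [prevA, prevB] = predecessorStates(endState);
    return !isLongPath(prevA, endState) && !isLongPath(prevB, endState);
}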

An example of an inversion to create the one end state of a solve from the corresponding zero end state is to apply a specific sequence of eight rotations. It does not have to be a self-inverse (a squared identity), as it is known which of the end states is being considered.
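
A sketch of such an inversion, assuming a hypothetical applyMove(state, move) and invertMove(move) over face states; the eight moves listed are purely illustrative, not a sequence specified anywhere on this page:

// Illustrative only: a fixed eight-rotation sequence mapping the zero end
// state to the corresponding one end state.
const EIGHT_ROTATIONS = ["U", "R", "F", "D", "L", "B", "U'", "R'"];

function zeroToOne(state) {
    return EIGHT_ROTATIONS.reduce((s, m) => applyMove(s, m), state);
}

function oneToZero(state) {
    // Not assumed to be self-inverse: undo by applying the inverted moves
    // in reverse order.
    return EIGHT_ROTATIONS.slice().reverse()
        .reduce((s, m) => applyMove(s, invertMove(m)), state);
}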

As a reversible PRBS (generated by a totient public key system) ensures the start can be regenerated from the end, and the one path can be generated from the zero path (perhaps by a second iteration of the PRBS), unique decode follows from the strict conditions forcing the nodes above and below the start node to be short on their one path and long on their zero path. All other path length situations are dealt with by reversible determinism.
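
One way to read "reversible PRBS generated by a totient public key system" is an RSA-style modular exponentiation, where the public exponent steps forward and the private exponent steps back; the values below are toy numbers and the whole sketch is an assumption about the intended construction:

// Toy reversible PRBS step using exponentiation mod n. e and d are a
// public/private exponent pair with respect to the totient of n, so a
// forward step with e is undone by a backward step with d.
const n = 3233n;    // toy modulus (61 * 53); real use needs a large modulus
const e = 17n;      // public exponent
const d = 2753n;    // private exponent, inverse of e mod phi(n)

function modPow(base, exp, mod) {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
        if (exp & 1n) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1n;
    }
    return result;
}

const prbsNext = s => modPow(s, e, n);   // forward step
const prbsPrev = s => modPow(s, d, n);   // reverse step: regenerates the start from the end
// A second iteration of the forward step could derive the one path state
// from the zero path state, as suggested above.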

Decompression of one cycle can take up to 8 cube solves, and on average only 1 in 8 cycles stores a bit. This puts the decode in the 64 solves per bit range, and the encode in the 48 solves per bit range. As the cube solve is the most intensive part (even though cache size is not an issue given the face state size), there is a further requirement to develop Beyond the Cube and consider less intensive mathematical group systems.
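
The arithmetic behind those figures, as a rough sketch (the per-cycle encode solve count is inferred from the 48 figure, not stated directly):

// Decode: up to 8 solves per cycle, and roughly 1 stored bit per 8 cycles.
const solvesPerDecodeCycle = 8;
const cyclesPerBit = 8;
const decodeSolvesPerBit = solvesPerDecodeCycle * cyclesPerBit;  // 64
// Encode: the quoted 48 solves per bit implies about 6 solves per cycle
// at the same 8 cycles per bit.
const encodeSolvesPerBit = 6 * cyclesPerBit;                     // 48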

It is possible that the state of the PRBS can contain a specific bit for the forward long/short orientation. This is possible because the relative size is the metric of interest, not the actual solution move count. The process then becomes: extract the orientations from the start column, assign the cycle-end PRBS based on the encoder logic, and use this PRBS to find the new start column orientations. This involves a number of totient modulo operations, and the size of the PRBS seed can be made small.

One final thought on the all-zero PRBS value: use a simple check, reseed, and save these locations at the end of compression along with a count. This then sets a minimum size for the PRBS in bits, so that the all-zero value is not remotely likely, and definitely occurs less often than a reasonable number of cycles on the longest-term average. Say around 32 bits, but it could be any cipher text (the last file-ordered modulo block used). Of course a maximal-length linear congruential PRBS with reversibility could be used to avoid this need, but it would miss the encrypt-as-you-compress public key option.
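
A sketch of that simple check, reusing the prbsNext() step sketched earlier; reseed() and the exception log are placeholders for whatever the real scheme records:

// Guard against the all-zero PRBS value: note where it happened, reseed,
// then continue. The exception locations and their count are saved at the
// end of compression, as described above.
const zeroExceptions = [];

function stepPrbs(state, cycleIndex) {
    if (state === 0n) {
        zeroExceptions.push(cycleIndex);  // remembered for the trailing count
        state = reseed(cycleIndex);       // hypothetical reseed source
    }
    return prbsNext(state);
}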

A final zero bit would of course need to be compressed to make all end states be in the zero path, and if this combines with a terminal all-zero seed (or an all-one seed for the other exception, handled the same way), this is fine. The all-zero exception counts are counted in 64 bits from the start or from the last exception, forming a chain, where the next reseeded block starts with the count of bits in the last seeded block. This makes for an easy reseed by constant addition of 3 times the reseeded block number. The final count is compressed before the terminal zero. This is all done in a nested fashion, so that if a reseed happens while storing a count from a previous reseed, the new count is compressed before the old count is completely compressed. The final terminal zero comes last of all.

So apart from a public decrypt key, the standard just needs bit 217 defined as the radix bit indicating the long/short orientation. Its polarity is the long polarity. Any key pair is now suitable for compression, and the block number becomes part of the compressed output along with the seed (written after it, big-endian). Decompression works until block zero is reached. This implies block numbers start from 1. I specify 32 bits for the block number.
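
A sketch of those two details: reading the orientation from bit 217 of the PRBS state follows directly from the text, while the trailer layout (32-bit block number, then the seed, both big-endian) is one reading of "written after it":

// Orientation: bit 217 of the PRBS state gives the long/short polarity.
function longPolarity(prbsState) {
    return (prbsState >> 217n) & 1n;   // 1n means the long orientation
}

// Framing sketch: 32-bit block number, then the seed, both big-endian.
// Block numbers start from 1; decompression stops when block zero is reached.
function frameTrailer(blockNumber, seedBytes) {
    const trailer = new Uint8Array(4 + seedBytes.length);
    trailer[0] = (blockNumber >>> 24) & 0xff;
    trailer[1] = (blockNumber >>> 16) & 0xff;
    trailer[2] = (blockNumber >>> 8) & 0xff;
    trailer[3] = blockNumber & 0xff;
    trailer.set(seedBytes, 4);
    return trailer;
}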

I'm sure that's enough of a specification for now, apart from noting that compression runs from the last bit of the file to the first bit (big-endian), as this is LIFO. Enjoy!

A nicer standard is developing on The Standard page, mainly to use ssh keys. These seem to be the only ones I have ever needed to make for real use, and not just to experiment.