Thoughts and Opinions

"Big X"

Something often said around the community is that we have a "Big 4" of methods: Roux, CFOP, Petrus, and ZZ. Some instead say "Big 3" (Roux, CFOP, and ZZ) or even just "Big 2" (Roux and CFOP). What does it mean for a method to be in the "Big X" group? There are different interpretations, and not everyone agrees on how the phrase should be used. Does Big X refer to popularity, influence, competitiveness, useful concepts, or the easy conclusion of "a combination of all"?

Popularity is almost certainly the main factor being used to place the current methods into the Big X group. It makes sense on the surface: many people are using a method, so it must be good. However, popularity isn't an indicator of quality, and if unpopular methods of better quality exist, the popular one shouldn't be regarded so highly as to be ranked at the top. Put another way, popularity isn't a very respectable metric. In addition, how do we measure popularity? At what level of popularity, and for how long must a method maintain that level, to be included in the Big X?

Competitiveness would be a good way of ranking methods, if we had some objective way of measuring it. Competition results aren't the way to go. Sure, CFOP holds most of the records, but that doesn't show that it is the best method, or even one of the best. Popularity breeds popularity. If Jessica Fridrich hadn't created a website in the late 90s and everyone had instead learned Petrus from Lars Petrus's website, maybe the majority of the community would now be using, optimizing, and breaking most of the records with Petrus. There are also many methods developed since the mid-2000s that aren't receiving attention and proper analysis because of this perpetual popularity effect.

Useful concepts are an interesting criterion that a few people use to determine the Big X. This means, for example, that we look at Petrus as teaching blockbuilding and EO. However, are these concepts being picked out based on the method itself, or because we want a method we like to be within the Big X? It is a kind of confirmation bias, where we search for the things that agree with what we want to be true. Many methods can teach concepts of their own: MI2 or SSC can teach DR for speedsolving, APB can teach algorithmic yet intuitive blockbuilding, 42 can teach working with complex pseudo, and Nautilus and some other methods can teach option select. Blockbuilding and EO in Petrus are basic concepts that would certainly place it in the Big X under this one form of categorization, but that isn't enough to cover the wide range of possibilities, some of which may come to be seen as groundbreaking and important in the future.

Influence may be the most logical way of determining what should go in the Big X. We can look at CFOP and see how much it has been used in the community, and how some recent methods have focused on TPS and on making the solve as algorithmic as possible. Petrus essentially invented blockbuilding. Roux combined blockbuilding with a refined Corners First approach to create a method with a low movecount. ZZ, while not the first method to orient all edges at the beginning of the solve, popularized the EO-first concept (though an argument can be made that it is an extension of Petrus EO). Even then, the importance of each of these can be debated.

If we are going by logic, respect, and quality, making influence the primary factor in creating a Big X seems like a good start, with everything else secondary but still important in narrowing down the classification. Once we can quantify factors such as competitiveness, the most important factor may change. An argument can of course be made for popularity being the reason a method is in the Big X, because those are the methods being used and talked about the most. It is a very simple way to determine a list. But care should be taken not to create the wrong mix of factors and assume that popularity equals quality.

Good enough isn't good enough

People often say things like "Don't fix what ain't broken" or "Why change? That's the way we've always done it." Phrases like this are anti-progress. We should always be striving to do things better, not discouraging others from trying. Good enough isn't good enough. This hobby is a sort of mental sport, with a community of mathematicians, programmers, and people from other fields that society views as intelligent. Yet this negative mentality exists even here. It's the old fear of change and fear of the unknown, or the fear that the way we are doing something isn't the best and that our way of thinking could be proven wrong.

Some even say that it isn't possible that we will ever find a better method than those that already exist. But it is often the ones who claim that something is impossible who are proven wrong. Progress is slow, but we are gradually improving existing methods and creating new ones which have the potential to be better than what came before. In the early years, the Roux method was seen by many as not competitive with CFOP. Yet Gilles Roux practiced and at one point became the unofficial second fastest solver in the world. Over time the Roux method gained users who have attained records such as the fastest unofficial average of 100 (two-handed and one-handed) and even world records. Instead of giving up and suggesting that others do the same, our focus should be on positive, forward thinking.

Hindsight

Sometimes when something new is discovered, or many years later, people say that the discovery is obvious. This is a result of hindsight bias. To give an example, when I first presented the block referencing of NBRS, at least one person thought that it was obvious and that anyone could have accidentally stumbled across the idea. The problem is that this way of thinking doesn't consider why the discovery didn't exist before. Singmaster notation and other puzzle notations have existed since the late 1970s, when the cube was first created and solving guide books were sold. The NBRS block referencing notation was discovered in 2010. If it was so obvious, why was no one using it in the 30+ years before I discovered it? Often a discovery is so simple, sitting right there in front of us, that it seems obvious only after it has been found.

Another thing that happens is that years after something is discovered, the community grows in knowledge. Concepts that were once slightly complex are seen as simple and almost common sense. And so some become tempted to view a discovery as obvious and not as special and important as it really is. Thinking in this way discredits the amount of thought and work that the person or group put into the discovery.

Future of methods

A thought I often have is that maybe we are solving backwards, or the opposite of how we could be. In our standard methods, we solve pieces in their correct orientations and/or positions from the start of the solve to the end. We gradually place pieces to the point where we have restricted movement and/or require memorized algorithms to get ourselves out of the small box in which we have placed ourselves. We are painting ourselves into a 3D corner instead of planning an efficient path.

If we look at a computer optimal solve for a given scramble, it solves the cube in what seems like 20 random moves. But one way of viewing the computer optimal solution, whether intentional or not, is that each turn influences pieces as it goes, bringing everything together at the end. Humans currently take over twice the number of moves to solve the cube as a computer does. The closer we approach those low movecount computer solutions, the worse the ergonomics become. But surely there is a way to do something similar to the global influencing embedded in the optimal solutions.

There is a concept that I discovered that I call Time Travel Solving. During inspection, pieces are examined to see how they will be affected by the steps or moves performed at the beginning of the solve. Then, before performing those initial moves, those pieces from later in the solve are influenced now to produce a better state. It is kind of like full-scale insertions. Could TTS, potentially applied not just at the beginning but at any easy point in the solve, be used to produce solves closer to the efficiency and appearance of computer solutions, but with better ergonomics? It seems very memory intensive on puzzles like the 3x3 and larger. But maybe with the right memory techniques, or a simplified version of the concept, we could achieve better solutions than what we are doing now.