Candy Crush Is Complicated—Even from a Mathematical Point of View

Don’t be annoyed if you fail at a certain level of the popular game Candy Crush Saga; computers also have their problems with it

Close-up view of a mobile device's display showing the game Candy Crush Saga.

Have you wasted hours playing Candy Crush Saga? You’re not alone. Since its 2012 release, it has been one of the most popular games on Facebook and on mobile devices. The application was downloaded more than 106 million times in the first half of 2023, making it the second most downloaded game app during that period (after a game called Subway Surfers).

The principle of the game is simple: on a grid covered with variously colored candies, you try to form horizontal or vertical chains of at least three identical ones by swapping neighboring candies with each other. Once such a chain is formed, the candies in it fizzle out, and the others above them move down. Different levels of the game have different objectives. For example, one goal is to form at least a given number of chains while using no more than a set number of swaps. In part because of its simplicity, the game enjoys great popularity. Perhaps it’s a bit too popular. Candy Crush has been criticized for having a high addiction potential—and maybe not just for its players.
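To make the matching rule concrete, here is a minimal Python sketch of a toy version of the mechanic; it simply scans a small grid for horizontal or vertical runs of three or more identical candies. It is an illustrative model, not the game's actual code, and the grid and color labels are made up for the example.

```python
# Toy model of the core matching rule: scan a grid of colored candies
# for horizontal or vertical runs of three or more identical symbols.
# Illustrative only; not the game's actual implementation.

def find_chains(grid):
    """Return the set of (row, col) positions that belong to a chain of >= 3."""
    rows, cols = len(grid), len(grid[0])
    matched = set()

    # Horizontal runs
    for r in range(rows):
        run_start = 0
        for c in range(1, cols + 1):
            if c == cols or grid[r][c] != grid[r][run_start]:
                if c - run_start >= 3:
                    matched.update((r, k) for k in range(run_start, c))
                run_start = c

    # Vertical runs
    for c in range(cols):
        run_start = 0
        for r in range(1, rows + 1):
            if r == rows or grid[r][c] != grid[run_start][c]:
                if r - run_start >= 3:
                    matched.update((k, c) for k in range(run_start, r))
                run_start = r

    return matched


grid = [
    ["R", "G", "G", "G"],   # one horizontal chain of three greens
    ["B", "R", "Y", "B"],
    ["B", "Y", "R", "Y"],
]
print(find_chains(grid))    # {(0, 1), (0, 2), (0, 3)}
```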

“In some ways, at least for mathematicians, looking at Candy Crush as a math problem can be as addictive as playing it,” wrote computer scientist Toby Walsh of the University of New South Wales in Australia in an article for American Scientist. He was bitten by the Candy Crush bug back in 2014. But unlike most other fans, he didn’t try to master the game as best he could. Instead he wanted to understand how complex Candy Crush is from a mathematical perspective. In other words, how difficult is it for a computer to form a certain number of three-candy chains if the machine is given a set maximum number of candy swaps?


How Complex Can Games Get?

To sort mathematical tasks into different levels of difficulty, theoretical computer scientists have introduced the concept of so-called complexity classes. For example, there are computational problems for which no procedure is guaranteed ever to arrive at an answer; a computer can keep calculating forever without producing one. These are, in a sense, the most difficult of all tasks. As it turns out, the card game Magic: The Gathering belongs to this category: game situations can arise in which it is impossible to decide which of the players will win (even with optimal play).

To determine a game’s complexity, you need to know whether a proposed solution can be quickly checked. For example, if you are presented with a completed Sudoku puzzle, you can confirm whether the solution is correct without much effort. If a computer’s computation time for checking the solution increases only polynomially with the size of the task, then the problem belongs to the nondeterministic polynomial time (NP) class—the category of problems whose proposed solutions can be checked efficiently. This is also the case with Candy Crush: by tracing the claimed sequence of swaps step-by-step, you can quickly confirm whether the stated result (the number of candy chains that were annihilated) is correct.
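The Sudoku example can be made concrete with a short sketch. The Python function below, a simple illustration rather than anything tied to Candy Crush itself, checks a completed 9×9 grid in time that grows only polynomially with the size of the input.

```python
# Verifying a completed Sudoku grid: every row, column and 3x3 box must
# contain the digits 1-9 exactly once. The check requires only a fixed
# number of passes over the grid, i.e. polynomial time in the input size.

def is_valid_sudoku(grid):
    target = set(range(1, 10))
    # Rows and columns
    for i in range(9):
        if set(grid[i]) != target:
            return False
        if {grid[r][i] for r in range(9)} != target:
            return False
    # 3x3 boxes
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            box = {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
            if box != target:
                return False
    return True
```

Called on any filled-in grid, the function answers True or False after a modest number of set comparisons; finding a valid filling from a partially empty grid is the hard part.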

But the fact that a problem lies within NP says nothing about how hard or easy it is to solve it. This is because simple problems in the polynomial time (P) category, another complexity class, also lie within NP. For P problems, a computer’s computation time for solving a problem only grows polynomially with the size of the problem. At first, this sounds similar to the definition for NP—with the essential difference that here we are talking about the computation time for solving a task and not for checking its solution. So problems in P can be efficiently solved as well as checked. These “simple” problems include sorting a list, for example. With a straightforward sorting method, the computation time grows, in the worst case, with the square of the list size. So if the number of elements doubles, you have to wait four times as long to sort the list. While this sounds like a lot of time, it is not much of a problem from a computer scientist’s point of view. Therefore, tasks that lie in P are considered easy: they can usually be solved without too much computational effort.
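As a concrete example of such a quadratic algorithm, here is a short Python sketch of insertion sort, a textbook method whose worst-case running time grows with the square of the list length. The list in the example is made up for illustration.

```python
# Insertion sort: a textbook problem in P. In the worst case (a
# reverse-sorted list), the number of comparisons grows roughly with the
# square of the list length, so doubling the input quadruples the work.

def insertion_sort(items):
    items = list(items)              # work on a copy
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one slot to the right
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items


print(insertion_sort([5, 2, 9, 1, 3]))  # [1, 2, 3, 5, 9]
```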

In contrast, there are difficult problems in NP. The computational time required to solve them grows exponentially with the size of the problem. “On my desktop computer, I have a program that takes a few hours to find the optimal routing for 10 trucks and to demonstrate that this solution was the best possible. But for 100 trucks, the same program would take more than the lifetime of the universe,” Walsh explained in his article. Optimal routing is among the prime examples of “NP-hard” problems, the name given to tasks that are at least as hard to solve as the most complex problems from NP.
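The explosion Walsh describes is easy to reproduce in miniature. The sketch below, a deliberately naive illustration rather than a serious routing tool, finds the shortest round trip through a handful of stops by trying every possible visiting order; because the number of orders grows factorially with the number of stops, the same approach becomes hopeless long before 100 trucks.

```python
# Brute-force route optimization: try every visiting order and keep the
# shortest round trip. The number of orders grows factorially with the
# number of stops, so this approach collapses well before 100 stops.

from itertools import permutations
from math import dist

def shortest_round_trip(points):
    start, *rest = points
    best_route, best_length = None, float("inf")
    for order in permutations(rest):
        route = [start, *order, start]
        length = sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))
        if length < best_length:
            best_route, best_length = route, length
    return best_route, best_length


stops = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]   # made-up coordinates
route, length = shortest_round_trip(stops)
print(route, round(length, 2))
```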

Given these definitions, it stands to reason that one almost always has to look at generalizations of a game to assess its complexity. After all, games such as chess, Go and even Candy Crush have a fixed size determined by the available playing field, whereas complexity classes describe how computation time grows as a problem gets larger. In such cases, theoretical computer scientists often study fictitious extensions of the games, where the board and the number of pieces or stones—or candies—can be arbitrarily large.

Walsh posed the question of which complexity class Candy Crush belongs to. Can a computer always find an efficient solution strategy for the game without much effort? If this were the case, Candy Crush would be in P. Or does a computer also struggle with finding the appropriate candies to swap? Walsh examined this question using a common computer science technique called “reduction.” To prove that a problem is NP-hard, one must show that all other problems in NP can be reduced to it. That is, a problem A is NP-hard if an algorithm that solves A could also be used to solve every other problem in NP.

Walsh had a plan to approach this. Namely, there is a whole set of known problems that are both in NP and NP-hard. If he could show that one of them can be encoded in Candy Crush—that every instance of it can be translated into an equivalent Candy Crush puzzle—he would prove the popular game is complex from a mathematical perspective. Walsh chose to compare the NP-hard “3-SAT problem” to Candy Crush.

3-SAT is a task from logic in which it is necessary to judge whether a linked series, or “concatenation,” of logical expressions is correct or leads to a contradiction. An example of such a concatenation is: x ∧ ¬x. At first glance, this looks complicated. But the statement can be translated quickly if you know that “∧” stands for “and at the same time” and that “¬” is a negation. Thus, the statement can be translated to: x and at the same time not x. The task now is to assign the value “true” or “false” to the variable x so that the overall statement becomes true (in other words, so that it does not generate a contradiction). In this example, that is impossible: for x = true the statement becomes “true and at the same time not true,” and for x = false it becomes “false and at the same time not false.” Neither version can ever be true, because something cannot be true and false at the same time. Therefore, this concatenation of expressions cannot be satisfied.
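The same conclusion can be reached mechanically. The tiny Python sketch below simply tries both possible truth values for x and shows that neither makes the expression true.

```python
# Try both truth values for x; "x and not x" is false either way,
# so the formula is unsatisfiable.
for x in (True, False):
    print(x, "->", x and not x)   # prints False for both assignments
```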

3-SAT problems involve chained statements that each directly link three variables in the form of (a1 ∨ b1 ∨ c1) ∧ (a2 ∨ b2 ∨ c2) ∧ ... ∧ (an ∨ bn ∨ cn). Here the symbol “∨” signifies “or.” A computer must attempt to assign truth values to the variables in order for the overall statement to be true. And as it turns out, this task is NP-hard: with every known method, the required computation time grows exponentially with the length of the formula in the worst case.
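The most naive way to attack 3-SAT is to try every combination of truth values, and that is exactly where the exponential growth comes from: n variables mean 2^n possible assignments. The Python sketch below implements this brute-force approach for a made-up formula; the clause encoding is chosen only for this illustration.

```python
# Brute-force 3-SAT: a formula is a list of clauses, each clause a list of
# three literals; the literal (i, True) means "variable i", and (i, False)
# means "not variable i". Trying all 2**n assignments takes exponential time.

from itertools import product

def brute_force_3sat(num_vars, clauses):
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[i] == wanted for i, wanted in clause) for clause in clauses):
            return assignment     # a satisfying assignment was found
    return None                   # the formula is unsatisfiable


# (x0 or x1 or not x2) and (not x0 or x2 or x1)
clauses = [[(0, True), (1, True), (2, False)],
           [(0, False), (2, True), (1, True)]]
print(brute_force_3sat(3, clauses))   # (False, False, False)
```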

Walsh now needed to show that any 3-SAT problem can be represented in Candy Crush and that solving the Candy Crush problem automatically solves the associated 3-SAT problem. To do this, he matched configurations of candy in the Candy Crush game with variables and logical connections in the 3-SAT problem so that a certain pattern of candy represented a variable. In this way, he was able to prove that any logical statement in the form of 3-SAT can be represented by an appropriate distribution of candies in Candy Crush.

Here’s how it works: when a player makes a particular swap, Walsh interprets that as assigning a truth value to a variable—for example, “Variable 1 is false.” Each swap changes the playing field. In addition, it can create a three-candy chain that immediately disappears, allowing other candies to move down the board. Once the player has made as many swaps as the corresponding 3-SAT problem has variables, the candies remaining on the board reveal whether the associated 3-SAT statement is true or false. For example, if the square in the third row and second column contains a yellow lemon drop after all swaps, that corresponds to the statement “true.” Walsh distributed the candy on the playing field in advance in such a way that the yellow lemon drop only lands on this square if the minimum number of three-candy chains has been formed—and the Candy Crush task has thus been successfully solved.
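Walsh's actual board layouts and candy gadgets are intricate and are not reproduced here. The sketch below is only a conceptual illustration of the direction of the argument: if each swap is read off as a truth value for one variable (under a made-up convention), then a successful play of the puzzle hands you an assignment that can be checked against the original 3-SAT formula.

```python
# Purely conceptual illustration (not Walsh's actual gadget construction):
# read each of the n swaps as a truth value for one variable, then check
# whether the resulting assignment satisfies the original 3-SAT formula.

def assignment_from_swaps(swap_choices):
    # swap_choices[i] == "left" is read as "variable i is true",
    # anything else as "variable i is false" -- a hypothetical convention.
    return [choice == "left" for choice in swap_choices]

def satisfies(assignment, clauses):
    return all(any(assignment[i] == wanted for i, wanted in clause) for clause in clauses)


clauses = [[(0, True), (1, False), (2, True)]]      # x0 or not x1 or x2
swaps = ["right", "right", "left"]                  # read as x0=F, x1=F, x2=T
print(satisfies(assignment_from_swaps(swaps), clauses))   # True
```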

This is how Walsh was able to prove in March 2014 that Candy Crush is NP-hard and thus complicated from a mathematical perspective. That may be reassuring if you fail at another Candy Crush level. But the complexity also has its appeal. As Walsh wrote in American Scientist, “That trait may be part of what makes the game so addictive; if it were as easy to solve as tic-tac-toe, for instance, it wouldn’t be nearly as engaging.”

This article originally appeared in Spektrum der Wissenschaft and was reproduced with permission.