Debugging code is a skill that develops only with time, and the debugging process often takes longer than writing the code itself. That is why planning, carefully considering test cases, and incremental development with frequent testing are all essential practices that you should try to get right from the beginning.
There are some relatively simple techniques that can help with finding the source of a code bug. Some related and important questions are:
Have you modularised the code, using functions that can themselves be easily tested?
Have you documented the code? All functions should have docstrings, and variables should be well named, reducing any ambiguity about their purpose and what the code is doing.
Have you created test cases for the functions?
Have you tested regularly?
Can you revert to earlier versions of the code that were working? Git would be an entire additional module in itself, but for a career in coding it is essential. Learn more here.
As your programs become increasingly sophisticated, you will encounter three general types of errors:
These are the first errors you will make and the easiest to fix. A syntax error means that you have violated the "grammar" rules of Python. Python does its best to point right at the line and character where it noticed it was confused. The only tricky bit of syntax errors is that sometimes the mistake that needs fixing is actually earlier in the program than where Python noticed it was confused. So the line and character that Python indicates in a syntax error may just be a starting point for your investigation.
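For instance, a missing colon produces a SyntaxError. The snippet below is a minimal, made-up illustration; the exact wording of the message varies between Python versions.

```python
# A syntax error: the colon at the end of the if statement is missing.
temperature = 35
if temperature > 30          # Python reports a SyntaxError on (or near) this line
    print("It's hot")
```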
A logic error is when your program has good syntax but there is a mistake in the order of the statements or perhaps a mistake in how the statements relate to one another. A good example of a logic error might be, "take a drink from your water bottle, put it in your backpack, walk to the library, and then put the top back on the bottle."
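In code, a logic error often amounts to perfectly valid statements executed in the wrong order. A minimal, hypothetical sketch:

```python
# A logic error: each statement is valid Python, but the order is wrong.
# The total is printed before the final price has been added,
# so the reported total is incomplete.
prices = [3.50, 2.00, 4.25]
total = 0
total += prices[0]
total += prices[1]
print("Total:", total)   # prints 5.5 -- printed too early
total += prices[2]       # the last item is only added after printing
```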
A semantic error is when your description of the steps to take is syntactically perfect and in the right order, but there is still a mistake in the program: a statement is syntactically valid, yet does not do what the programmer intended.
Sometimes these will cause your program to crash, as in the case of division by zero: the code works perfectly until, at some point, the divisor takes the value zero and an error is raised. Another possibility is that your function has a return statement with all of the real work placed after it; none of that code will ever be executed, because the function terminates as soon as it reaches a return statement.
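Both situations can be illustrated with a short, hypothetical sketch (the function names are invented for illustration):

```python
def average(values):
    """Return the mean of a list of numbers."""
    return sum(values) / len(values)   # crashes with ZeroDivisionError if values is []

def summarise(values):
    """Meant to return a formatted summary, but everything after the first
    return statement is unreachable and never runs."""
    return "summary"
    mean = average(values)             # never executed
    return f"summary: mean = {mean}"   # never executed
```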
This is only useful for small programs or functions, but it does give you the ability to step through code one line at a time, and even more usefully it shows a visual representation of all of the variables that are "in play". It shows their type, their values, globals vs. local variables, indices and so on. This is a great tool to "see" some code execute.
pythontutor visualiser
After entering your code, you can step through its execution and "see" all of the elements of the code rendered visually on the right hand side of the screen. This will also show variable names, types and values, as well as any functions and which of them are "active".
Beneath the code visualiser in PythonTutor, the following controls are available:
Visualize Execution allows you to go to the next screen and step through code
Options allow you to either:
show all frames or hide exited frames: this means it can show "finished" function calls as well as active ones
"render all objects on the heap" will show variables and functions as boxes
draw pointers as arrows: the object "boxes" will have arrows pointing to the values
Generate permanent link: makes it easy to send a copy of your code to someone to help with debugging.
Use print statements to monitor output
Use a divide-and-conquer approach: place a print statement in the "middle" of the code if that is possible, or print a "middle" calculation that is part of a sequence of calculations, so you can tell in which "half" the error is happening. Because each check halves the region you need to inspect, this typically locates the error far faster than scanning line by line; see the sketch after this list.
Write better - and more - test cases
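Here is a minimal sketch combining the first two ideas; the normalise function and its bug are invented for illustration. A print statement roughly half-way through the calculation tells you immediately which half contains the error:

```python
def normalise(values):
    """Scale values so that they sum to 1 (hypothetical buggy pipeline)."""
    total = sum(values)
    shifted = [v - min(values) for v in values]

    # Checkpoint roughly half-way through the calculation: if 'shifted'
    # already looks wrong, the bug is above this line; otherwise it is below.
    print("DEBUG shifted =", shifted, "total =", total)

    return [v / total for v in shifted]   # bug: should divide by sum(shifted)

print(normalise([2, 4, 6]))   # result sums to 0.5, not 1 -- the bug is below the checkpoint
```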
There is a complete walk-through of using VS Code to debug your code here. This is likely going to be the best way to learn as projects get larger and have more lines of code.
Testing a program against a well-chosen set of input tests gives the programmer confidence that the program is correct. During the testing process, the programmer observes input-output relationships, that is, the output that the program produces for each input test case. If the program produces the expected output and obeys the specification for each test case, then the program is successfully tested.
But if the output for one of the test cases is not the one expected, then the program is incorrect -- it contains errors (or defects, or "bugs"). In such situations, testing only reveals the presence of errors, but doesn't tell us what the errors are, or how the code needs to be fixed. In other words, testing reveals the effects (or symptoms) of errors, not the cause of errors. The programmer must then go through a debugging process, to identify the causes and fix the errors.
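For instance, a handful of input-output test cases for a small, hypothetical function might look like the following; if any assertion fails, we know the program contains an error, though not yet where or why:

```python
def is_even(n):
    """Return True if n is an even integer."""
    return n % 2 == 0

# A few well-chosen input/output test cases.
assert is_even(4) is True
assert is_even(7) is False
assert is_even(0) is True      # boundary case
assert is_even(-2) is True     # negative input
print("All tests passed")
```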
Surprisingly, the debugging process may take significantly more time than writing the code in the first place. A large amount (if not most) of the development of a piece of software goes into debugging and maintaining the code, rather than writing it.
Therefore, the best thing to do is to avoid the bug when you write the program in the first place! It is important to sit and think before you code: decide exactly what needs to be achieved, how you plan to accomplish that, design the high-level algorithm cleanly, convince yourself it is correct, decide what are the concrete data structures you plan to use, and what are the invariants you plan to maintain. All the effort spent in designing and thinking about the code before you write it will pay off later. The benefits are twofold. First, having a clean design will reduce the probability of defects in your program. Second, even if a bug shows up during testing, a clean design with clear invariants will make it much easier to track down and fix the bug.
It may be very tempting to write the program as fast as possible, leaving little or no time to think about it beforehand. The programmer will be happy to see the program done in a short amount of time, but is likely to get frustrated shortly afterwards: without good thinking, the program will be complex and unclear, so maintenance and bug fixing will become an endless process.
Once the programmer starts coding, he should use defensive programming. This is similar to defensive driving, which means driving under worst-case scenarios (e.g., other drivers violating traffic laws, unexpected events or obstacles, etc.). Similarly, defensive programming means developing code such that it works correctly under the worst-case scenarios from its environment. For instance, when writing a function, one should assume worst-case inputs to that function, i.e., inputs that are too large, too small, or inputs that violate some property, condition, or invariant; the code should deal with these cases, even if the programmer doesn't expect them to happen under normal circumstances.
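A small, hypothetical sketch of the idea in Python: the function checks for worst-case inputs before doing its real work, even though a well-behaved caller would never pass them.

```python
def safe_mean(values):
    """Mean of a list of numbers, written defensively (hypothetical example)."""
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple of numbers")
    if len(values) == 0:
        raise ValueError("values must not be empty")   # worst case: empty input
    return sum(values) / len(values)
```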
Remember, the goal is not to become an expert at fixing bugs, but rather to get better at writing robust, (mostly) error-free programs in the first place. As a matter of attitude, programmers should not feel proud when they fix bugs, but rather embarrassed that their code had bugs. If there is a bug in the program, it is only because the programmer made mistakes.
Even after careful thought and defensive programming, a program may still have defects. Generally speaking, there are several kinds of errors one may run into:
Syntax or type errors. These are always caught by the compiler, and reported via error messages. Typically, an error message clearly indicates the cause of error; for instance, the line number, the incorrect piece of code, and an explanation. Such messages usually give enough information about where the problem is and what needs to be done. In addition, editors with syntax highlighting can give good indication about such errors even before compiling the program.
Typos and other simple errors that have passed undetected by the type-checker or the other checks in the compiler. Once these are identified, they can easily be fixed. Here are a few examples: missing parentheses, for instance writing x + y * z instead of (x + y) * z; typos, for instance case t of ... | x::tl => contains(x,t); passing parameters in incorrect order; or using the wrong element order in tuples.
Implementation errors. It may be the case that logic in the high-level algorithm of a program is correct, but some low-level, concrete data structures are being manipulated incorrectly, breaking some internal representation invariants. For instance, a program that maintains a sorted list as the underlying data structure may break the sorting invariant. Building separate ADTs to model each data abstraction can help in such cases: it can separate the logic in the algorithm from the manipulation of concrete structures; in this way, the problem is being isolated in the ADT. Calls to repOK() can further point out what parts of the ADT cause the error.
Logical errors. If the algorithm is logically flawed, the programmer must re-think the algorithm. Fixing such problems is more difficult, especially if the program fails on just a few corner cases. One has to closely examine the algorithm, and try to come up with an argument why the algorithm works. Trying to construct such an argument of correctness will probably reveal the problem. A clean design can help a lot figuring out and fixing such errors. In fact, in cases where the algorithm is too difficult to understand, it may be a good idea to redo the algorithm from scratch and aim for a cleaner formulation.
The debugging process usually consists of the following: examine the error symptoms, identify the cause, and finally fix the error. This process may be quite difficult and require a large amount of work, because of the following reasons:
The symptoms may not give clear indications about the cause. In particular, the cause and the symptom may be remote, either in space (i.e., in the program code), or in time (i.e., during the execution of the program), or both. Defensive programming can help reduce the distance between the cause and the effect of an error.
Symptoms may be difficult to reproduce. Replay is needed to better understand the problem. Being able to reproduce the same program execution is a standard obstacle in debugging concurrent programs. An error may show up only in one particular interleaving of statements from the parallel threads, and it may be almost impossible to reproduce that same, exact interleaving.
Errors may be correlated. Therefore, symptoms may change during debugging, after fixing some of the errors. The new symptoms need to be re-examined. The good part is that the same error may have multiple symptoms; in that case, fixing the error will eliminate all of them.
Fixing an error may introduce new errors. Statistics indicate that in many cases fixing a bug introduces a new one! This is the result of trying to do quick hacks to fix the error, without understanding the overall design and the invariants that the program is supposed to maintain. Once again, a clean design and careful thinking can avoid many of these cases.
Although there is no precise procedure for fixing all bugs, there are a number of useful strategies that can reduce the debugging effort. A significant part (if not all) of this process is spent localizing the error, that is, figuring out the cause from its symptoms. Below are several useful strategies to help with this. Keep in mind that different techniques are better suited in different cases; there is no clear best method. It is good to have knowledge and experience with all of these approaches. Sometimes, a combination of one or more of these approaches will lead you to the error.
Incremental and bottom-up program development. One of the most effective ways to localize errors is to develop the program incrementally, and test it often, after adding each piece of code. It is highly likely that if there is an error, it occurs in the last piece of code that you wrote. With incremental program development, the last portion of code is small; the search for bugs is therefore limited to small code fragments. An added benefit is that small code increments will likely lead to few errors, so the programmer is not overwhelmed with long lists of errors.
Bottom-up development maximizes the benefits of incremental development. With bottom-up development, once a piece of code has been successfully tested, its behavior won't change when more code is incrementally added later. Existing code doesn't rely on the new parts being added, so if an error occurs, it must be in the newly added code (unless the old parts weren't tested well enough).
Instrument program to log information. Typically, print statements are inserted. Although the printed information is effective in some cases, it can also become difficult to inspect when the volume of logged information becomes huge. In those cases, automated scripts may be needed to sift through the data and report the relevant parts in a more compact format. Visualization tools can also help in understanding the printed data. For instance, to debug a program that manipulates graphs, it may be useful to use a graph visualization tool (such as AT&T's graphviz) and print information in the appropriate format (.dot files for graphviz).
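As a minimal sketch, Python's standard logging module can be used instead of bare print statements, so the volume and format of the logged information stays under control (the merge function is a hypothetical example):

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(funcName)s: %(message)s")
log = logging.getLogger(__name__)

def merge(left, right):
    """Merge two sorted lists (hypothetical example, instrumented with logging)."""
    log.debug("left=%s right=%s", left, right)
    result = []
    while left and right:
        result.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    result.extend(left or right)
    log.debug("result=%s", result)
    return result

merge([1, 4, 9], [2, 3, 10])
```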
Instrument program with assertions. Assertions check whether the program indeed maintains the properties or invariants that your code relies on. Because the program stops as soon as an assertion fails, the point where the program stops is likely to be much closer to the cause, and is a good indicator of what the problem is. An example of assertion checking is the repOK() function that verifies whether the representation invariant holds at function boundaries. Note that checking invariants or conditions is the basis of defensive programming. The difference is that the number of checks is usually increased during debugging for those parts of the program that are suspected to contain errors.
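A small sketch of the same idea in Python, where assertions check a sortedness invariant at the function's boundaries, in the spirit of repOK() (hypothetical example):

```python
def insert_sorted(items, value):
    """Insert value into an already-sorted list, keeping it sorted.
    The assertions act as a repOK()-style invariant check."""
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "precondition violated: input list is not sorted"
    items.append(value)
    items.sort()
    assert all(items[i] <= items[i + 1] for i in range(len(items) - 1)), \
        "postcondition violated: result is not sorted"
    return items

print(insert_sorted([1, 3, 7], 5))   # [1, 3, 5, 7]
```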
Use debuggers. If a debugger is available, it can replace the manual instrumentation using print statements or assertions. Setting breakpoints in the program, stepping into and over functions, watching program expressions, and inspecting the memory contents at selected points during the execution will give all the needed run-time information without generating large, hard-to-read log files.
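In Python, for example, the built-in breakpoint() call drops execution into the pdb debugger, giving stepping and variable inspection without generating any log files (a minimal sketch):

```python
def tricky(values):
    total = 0
    for v in values:
        breakpoint()        # execution pauses here and pdb starts
        total += v * v
    return total

# Typical pdb commands once stopped at the breakpoint:
#   p v, p total   -- print variables
#   n              -- execute the next line
#   c              -- continue to the next breakpoint
tricky([1, 2, 3])
```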
Backtracking. One option is to start from the point where the problem occurred and go back through the code to see how that might have happened.
Binary search. The backtracking approach will fail if the error is far from the symptom. A better approach is to explore the code using a divide-and-conquer approach, to quickly pin down the bug. For example, starting from a large piece of code, place a check halfway through the code. If the error doesn't show up at that point, it means the bug occurs in the second half; otherwise, it is in the first half. Thus, the code that needs inspection has been reduced to half. Repeating the process a few times will quickly lead to the actual problem.
Problem simplification. A similar approach is to gradually eliminate portions of the code that are not relevant to the bug. For instance, if a function fun f() = (g();h();k()) yields an error, try eliminating the calls to g, h, and k successively (by commenting them out), to determine which is the erroneous one. Then simplify the code in the body of the buggy function, and so on. Continuing this process, the code gets simpler and simpler, and the bug will eventually become evident. A similar technique can be applied to simplify data rather than code. If the size of the input data is too large, repeatedly cut parts of it and check whether the bug is still present. When the data set is small enough, the cause may be easier to understand.
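A rough Python analogue of the same idea, with g, h and k as invented stand-ins: comment out the calls one at a time until the error disappears, which points to the offending call.

```python
def g(): return 1
def h(): return 1 / 0          # the (hidden) source of the error
def k(): return 3

def f():
    g()
    # h()    # temporarily commented out: the error disappears, so h is the culprit
    k()

f()
```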
A scientific method: form hypotheses. A related approach is as follows: inspect the test case results; form a hypothesis that is consistent with the observed data; and then design and run a simple test to refute the hypothesis. If the hypothesis has been refuted, derive another hypothesis and continue the process. In some sense, this is also a simplification process: it reduces the number of possible hypotheses at each step. But unlike the above simplification techniques, which are mostly mechanical, this process is driven by active thinking about an explanation. A good approach is to try to come up with the simplest hypotheses and the simplest corresponding test cases.
Consider, for example, a function palindrome(s:string):bool, and suppose that palindrome("able was I ere I saw elba") returns an incorrect value of false. Here are several possible hypotheses for this failure. Maybe palindrome fails for inputs with spaces (test " "); maybe it fails for inputs with upper case letters (try "I"); maybe it fails for inputs of odd length greater than one (try "ere"), and so on. Forming and testing these hypotheses one after another can lead the programmer to the source of the problem.
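A sketch of how that process might play out; the buggy implementation below and the final test case are invented purely for illustration:

```python
def palindrome(s: str) -> bool:
    """Hypothetical buggy implementation: only the FIRST space is removed."""
    cleaned = s.lower().replace(" ", "", 1)   # bug: the count argument of 1
    return cleaned == cleaned[::-1]

print(palindrome("able was I ere I saw elba"))  # False -- the reported failure
print(palindrome(" "))      # True: a single space does not expose the bug
print(palindrome("I"))      # True: upper case is not the problem
print(palindrome("ere"))    # True: odd length > 1 is not the problem
print(palindrome("a a a"))  # False: two spaces expose the bug -> cause found
```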
Bug clustering. If a large number of errors are being reported, it is useful to group them into classes of related bugs (or similar bugs), and examine only one bug from each class. The intuition is that bugs from each class have the same cause (or a similar cause). Therefore, fixing a bug will automatically fix all the other bugs from the same class (or will make it obvious how to fix them).
Error-detection tools. Such tools can help programmers quickly identify violations of certain classes of errors. For instance, tools that check safety properties can verify that file accesses in a program obey the open-read/write-close file sequence; that the code correctly manipulates locks; or that the program always accesses valid memory. Such tools are either dynamic (they instrument the program to find errors at run-time), or use static analysis (look for errors at compile-time).
For instance, Purify is a popular dynamic tool that instruments programs to identify memory errors, such as invalid accesses or memory leaks. Examples of static tools include ESC/Java and Spec#, which use theorem-proving approaches to check more general user specifications (pre- and post-conditions, or invariants); or tools from the company Coverity that use dataflow analysis to detect violations of safety properties. Such tools can dramatically increase productivity, but checking is restricted to a particular domain or class of properties. There is also an associated learning curve, although that is usually low. Currently, there are relatively few such tools, and this is more an (active) area of research.
A number of other strategies can be viewed as a matter of attitude about where to expect the errors:
The bug may not be where you expect it. If a large amount of time has been spent unsuccessfully inspecting a particular piece of code, the error may not be there. Keep an open mind and start questioning the other parts of the program.
Ask yourself where the bug is not. Sometimes, looking at the problem upside-down gives a different perspective. Often, trying to prove the absence of a bug in a certain place actually reveals the bug in that place.
Explain to yourself or to somebody else why you believe there is no bug. Trying to articulate the problem can lead to the discovery of the bug.
Inspect input data, test harness. The test case or the test harness itself may be broken. One has to check these carefully, and make sure that the bug is in the actual program.
Make sure you have the right source code. One must ensure that the source code being debugged corresponds to the actual program being run, and that the correct libraries are linked. This usually requires a few simple checks; using makefiles and make programs (e.g., the Compilation Manager and .cm files) can reduce this to just typing a single command.
Take a break. If too much time is spent on a bug, the programmer becomes tired and debugging may become counterproductive. Take a break, clear your mind; after some rest, try to think about the problem from a different perspective.
All of the above are techniques for localizing errors. Once they have been identified, errors need to be corrected. In some cases, this is trivial (e.g., for typos and simple errors). Some other times, it may be fairly straightforward, but the change must ensure maintaining certain invariants. The programmer must think well about how the fix is going to affect the rest of the code, and make sure no additional problems are created by fixing the error. Of course, proper documentation of these invariants is needed. Finally, bugs that represent conceptual errors in an algorithm are the most difficult to fix. The programmer must re-think and fix the logic of the algorithm.