
## Debugging

Debugging is typically what programmers do about 90% of the time.

This is a sad but not unrealistic fact of life. Given that fact, the creators of R have generously provided useful debugging tools to make programmers’ lives a little easier.

The debugging tools should be used as much as necessary to minimize the time spent debugging and to maximize the time spent, as John Chambers wrote, “turning ideas into software”.

However, it is all too easy for a programmer to develop an unhealthy relationship with the debugger. This is to be avoided: the debugger should not replace common sense in programming and careful design.

## Poor man’s debugging

Poor man’s debugging doesn’t require any knowledge of the debugging functions. It is probably the simplest of the debugging methods, which often makes it the most effective!

It is an extremely easy method to implement, and it can tell you exactly where something went wrong.

It is based on the print() and cat() functions; the differences between print() and cat() are described at the end of this section. This kind of debugging is done by simply printing some text between the lines of code, including the values of the variables you are currently working with. For this reason, it is also called Debug by Print.

As a simple example, consider this basic function:
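The function itself is not shown in this extract; a minimal sketch consistent with the description (the names myplot, df1, df2 and df3 are assumptions) could be:

```r
## A basic function that plots column x of a data frame.
myplot <- function(df) {
  plot(df$x)
}

df1 <- data.frame(x = rnorm(10))              # x is numeric
df2 <- data.frame(x = c("a", "b", "a"),
                  stringsAsFactors = TRUE)    # x is a factor
df3 <- data.frame(x = c("a", "b", "a"),
                  stringsAsFactors = FALSE)   # x is character: plot() fails
```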

This function works when x is numeric:

The function works as well when x is a factor:

The same function stops working when the option stringsAsFactors is set to FALSE.

As a very simple debugging procedure we could insert a cat command within the function body:
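A hedged sketch of what inserting a cat() call might look like (the function and variable names are assumptions):

```r
myplot <- function(df) {
  x <- df$x
  cat("Debug: x is of class", class(x), "\n")   # poor man's debugging line
  plot(x)
}
```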

We could easily observe that x is a character vector and, as a consequence, it cannot be used as a plotting variable.

When using iterations, markers can turn printing into a very efficient debugging tool. rmean() is a trivial example illustrating the use of markers within a for loop.
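rmean() is not included in this extract; a sketch consistent with the description, where a Loop marker is printed whenever the running sum s is not finite, might be:

```r
## Compute the mean of x with a for loop, printing a "Loop" marker
## whenever the running sum s is not finite.
rmean <- function(x) {
  s <- 0
  for (i in seq_along(x)) {
    s <- s + x[i]
    if (!is.finite(s))
      cat("Loop:", "i =", i, "x[i] =", x[i], "s =", s, "\n")
  }
  s / length(x)
}

rmean(c(1, 2, NA, 4))   # the marker shows where s becomes non-finite
```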

This example fully illustrates how to take advantage of this debugging technique. In this instance, a marker Loop is used to indicate that the printout comes from within the loop and that s is not finite. This makes it easy to see at a glance which part of the code is being processed.

The values of the variables being worked on within the loop inside rmean() are printed. If there is a simple flaw in our logic, or something else is going wrong, this output makes most problems easy to track down.

In other words, when a function doesn’t work, it may be useful to insert print() or cat() calls at several points in the function to detect where it fails.

## stop() and warning()

Functions stop() and warning() provide basic tools for exception handling. stop() stops the execution of the current expression and executes an error action, while warning() generates a warning message corresponding to its argument. Extensive use of these functions during code development makes debugging sessions much shorter.
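As a hedged illustration (safe_log() is a hypothetical function, not from the original text):

```r
## stop() halts execution with an error; warning() just signals a condition
## and lets execution continue.
safe_log <- function(x) {
  if (!is.numeric(x))
    stop("x must be numeric")                        # execution stops here
  if (any(x <= 0))
    warning("non-positive values produce NaN or -Inf")
  log(x)
}
```

Checking inputs early with stop() turns obscure downstream failures into immediate, readable errors.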

## Trying a function

The function try() is a wrapper to run an expression that might fail, allowing the user’s code to handle error recovery. It is useful to prevent an error from stopping the execution of the whole function, as below. Again, good use of try() during development avoids extensive debugging sessions.

When function doit() is called directly, it returns an error and object thisError is not created. When try() is used as a wrapper around doit(), object thisTry is created even though the function fails. try() returns the value of the called function if no error occurs; otherwise, it returns an object of class “try-error”.
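A minimal sketch of the example described above (the body of doit() is an assumption):

```r
doit <- function() stop("something went wrong")

## thisError <- doit()           # error: execution halts, thisError is not created
thisTry <- try(doit(), silent = TRUE)   # error is caught, thisTry is created
class(thisTry)                          # "try-error"
```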

## Debugging in R

There are a few basic tools in R that are worth knowing about:

• browser() interrupts the execution of an expression or within an R function. Objects can be viewed and changed during execution. Support exists for setting conditional breakpoints.
• debug() marks a function for debugging, so that browser() will be called on entry.
• trace() modifies a function to allow debug code to be temporarily inserted.

### Browsing

Function browser() is usually called from within another function. When the evaluator encounters this function, it opens a browsing session on the evaluated frame so that the active environment can be explored.

All R commands can be used within the evaluated frame. Moreover, the interpreter sets up special commands to be used uniquely within the browser() environment:

• n advances to the next step;
• c continues to the end of the current section of code (e.g., to the end of a for loop, or the end of the function itself);
• where prints a stack trace of all active function calls;
• Q exits the browser and the current evaluation and returns to the top-level prompt.

As an example, consider the function below. It outputs the sum of a given parameter plus two random numbers. In order to see the values of the two random numbers rn and ru prior to summing them up, the evaluated frame of the function is inspected by inserting a call to browser() within the function body.
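A sketch of such a function (the name addrand is an assumption; rn and ru follow the text):

```r
addrand <- function(x) {
  rn <- rnorm(1)
  ru <- runif(1)
  browser()   # execution pauses here: inspect rn and ru with n, c, where, Q
  x + rn + ru
}
```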

The use of browser() finds its best application when called by function debug().

### Post Mortem Debugging

Classical debugging is called post mortem because the failing function is debugged after the evaluator has returned a critical error and the execution of the function has been interrupted. Function debug() is the most commonly used debugging tool.

Function debug() allows stepping through the execution of a function, line by line. At any step a browser() is opened on the evaluation frame.

As an example, consider this simple function:
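The function is not included in this extract; based on the description, it could look like:

```r
msg <- function(x) {
  if (x > 0) print("Hello") else print("Goodbye")
}
```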

This function simply prints “Hello” or “Goodbye” depending on whether x is greater than or less than 0. In principle, this function should never fail; nevertheless:
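A call that triggers the failure (msg() is repeated from the sketch above so the snippet runs on its own):

```r
msg <- function(x) if (x > 0) print("Hello") else print("Goodbye")

x <- log(-1)   # NaN, with a warning: "NaNs produced"
## msg(x)      # Error in if (x > 0) ... : missing value where TRUE/FALSE needed
```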

this function returns an error and the execution is halted. As a first step, a call to traceback() is always worthwhile:

In this case, traceback() does not help a lot. It simply says that the error occurs within the call to msg(). Not a big surprise, as msg() was the only function called.

The use of functions debug() or debugonce() returns interesting results:

As can be observed, log(-1) produces a simple warning and returns NaN. However, passing the resulting value of x into msg() ends in a fatal error: the comparison between NaN and 0 is undefined and returns NA, which, in turn, produces an error like the following:

Finally, debugonce() debugs a function only at its next call. When using debug(), the function passed as an argument is put into “debug mode” and kept there until undebug() is called on the same function. Function isdebugged() queries the debugging flag on a function.

In more complex situations, using traceback() after an error deep in the call stack can be a very useful debugging tool. Suppose we define functions f(), g() and h() as:
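A hypothetical set of definitions matching the description (the bodies are assumptions; what matters is that f() calls g(), which calls h(), and that h() fails on a non-finite value):

```r
h <- function(x) if (x > 0) "positive" else "non-positive"
g <- function(x) h(x)
f <- function(x) g(log(x))   # log(-1) yields NaN, which breaks the if() in h()
```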

when calling the outer function f():

The error seems to be generated within function f(). This is only partially true: the error is actually generated within function h(), as f() calls g(), which in turn calls h(). This mechanism is not always transparent to the end user, as the inner workings of a function are not necessarily known. A simple call to traceback() quickly brings the truth to light:

The output of traceback() is to be read bottom-up, or by following the output row numbers. In this case, traceback() says that the error occurs within function h(). As a result, function h() can be put into debug mode and f(-1) called again:

The error clearly occurs in function h(), as an NA within an if statement returns an error.

If we simply put f() directly into debug mode instead of running traceback() first, the result is far less useful:

### Debugging on error

When the evaluator encounters an error, it looks for an error action. The error action, a function or an expression to be evaluated, is determined by the error argument of options(), which is NULL by default:

By setting:

the evaluator stops whenever an error occurs and calls function recover() which, in turn, offers a selection of the active frames for browsing. By selecting 0, recover() is exited and the normal prompt is returned.
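In code:

```r
getOption("error")         # NULL: the default error action
options(error = recover)   # on error, recover() offers the active frames for browsing
## f(-1)                   # any error now opens the frame-selection menu; 0 exits
options(error = NULL)      # restore the default
```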

Debugging on error is clearly a useful technique when running R in interactive mode. When running R in BATCH mode, interactive debugging is of no use. In this case a call to dump.frames() provides the right solution: the evaluated frames are dumped into a file, last.dump by default, to be inspected later with function debugger(). Again, dump.frames() is set as the error argument of function options().

The error can now be inspected, as in interactive mode. The whole working environment has been dumped to file fdump that can be loaded, at any time, by function debugger():
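Putting the pieces together (the file name fdump follows the text):

```r
## Dump the evaluation frames to file "fdump.rda" on error (BATCH mode).
options(error = quote(dump.frames(dumpto = "fdump", to.file = TRUE)))

## Later, in an interactive session:
## load("fdump.rda")
## debugger(fdump)

options(error = NULL)   # restore the default when done
```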

Finally, during development, setting options("error") as:
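A common idiom, shown in the options() help page, combines the two behaviours depending on whether the session is interactive:

```r
## Interactive sessions get recover(); BATCH sessions get a frame dump.
options(error = quote(if (interactive()) recover() else dump.frames(to.file = TRUE)))
```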

could help to take advantage of both solutions.

## Profiling

Profiling R code helps to identify bottlenecks and pieces of code that are not efficiently implemented.

Profiling our code is usually the last thing we do when developing functions and packages. With serious profiling, the amount of computing time required can often be drastically reduced with very simple changes to our code.

Benchmarking R code is about comparing performances of different solutions to the same problem in terms of execution time.

Both techniques aim to reduce computational time.

### Rprof

As a main tool for profiling R code we make use of function Rprof().

As a basic example to demonstrate Rprof() capabilities, we consider a trivial function f1() written in an inefficient for() loop fashion:
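The body of f1() is not shown in this extract; a sketch matching the later discussion (the growing y <- c(y, tmp) is taken from the text, the rest is an assumption):

```r
f1 <- function(x) {
  y <- NULL
  for (i in seq_along(x)) {
    tmp <- x[i] + rnorm(1)
    y <- c(y, tmp)   # y is copied into a longer vector at every iteration
  }
  y
}
```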

When run on a large vector x, f1() may require a significant amount of computing time:

In order to understand whether any bottleneck exists within f1(), we can profile it with function Rprof().

Function Rprof() is used to control profiling. Profiling works by recording, at fixed intervals (by default every 20 msecs), which line in which R function is being executed, and writing the results to the file passed as an argument to Rprof(). Function summaryRprof() can then be used to summarize the activity.
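A typical profiling session might look like this (the file name f1.out and the definition of f1() are assumptions; f1() is repeated so the snippet runs on its own):

```r
f1 <- function(x) {   # as sketched above: grows y with c()
  y <- NULL
  for (i in seq_along(x)) y <- c(y, x[i] + rnorm(1))
  y
}

x <- runif(1e4)
Rprof("f1.out")        # start profiling, sampling every 20 msecs
y <- f1(x)
Rprof(NULL)            # stop profiling
summaryRprof("f1.out") # summarize: $by.self, $by.total, ...
```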

We can observe that function c() takes 25.52 seconds, corresponding to 96.67 per cent of the total execution time. This is because of the inefficient use of c() within f1() when computing y <- c(y, tmp): at each iteration R is forced to copy vector y into a larger memory area in order to allocate the longer vector.

We can avoid this repeated copying of y by simply defining the length of vector y in advance:
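The pre-allocated version, f2(), might read (the per-element computation is again an assumption):

```r
f2 <- function(x) {
  y <- numeric(length(x))   # allocate the whole result once
  for (i in seq_along(x)) y[i] <- x[i] + rnorm(1)
  y
}
```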

and, if we run Rprof() again:

we can observe that the execution time is now down to 0.74 seconds and that rnorm() takes a considerable share of the time.

If we want to evaluate the efficiency gain from using lapply() instead of a for() loop, we can rewrite f2() as:
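An lapply()-based version, sketched under the same assumptions:

```r
f3 <- function(x) {
  unlist(lapply(x, function(xi) xi + rnorm(1)))
}
```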

and profile this function as usual:

Note that in this case, function f2() takes 0.74 seconds while function f3() requires 0 seconds.

We can also make full use of R's vectorized capabilities and write:
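The fully vectorized version:

```r
f4 <- function(x) x + rnorm(length(x))   # one vectorized call, no explicit loop
```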

We can now profile this function as usual:

We can see no specific bottlenecks, and a very fast function:

Finally, as a whole, function summaryRprof() returns a list of four components:

Table $by.self lists the time spent by each function alone, while table $by.total lists the time spent by each function and all the functions it calls. In the simple case of f2(), the function itself takes just 0.22 seconds to do things like:

• initialize its own execution environment
• filling it with the promises of its arguments
• starting the execution of its body

while the same function remains active until the end of the execution: 0.74 seconds. The lines marked with .External refer to calls to external C or Fortran code.

#### Rprof with memory.profiling

Profiling can be used to gain information about memory usage as well as timing. In order to profile memory usage we need to call Rprof() with memory.profiling = TRUE and call summaryRprof() with memory = "both":
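Sketch (the function being profiled and the file name are assumptions; the option requires the capability noted below):

```r
f2 <- function(x) {   # pre-allocated loop, as above
  y <- numeric(length(x))
  for (i in seq_along(x)) y[i] <- x[i] + rnorm(1)
  y
}

Rprof("mem.out", memory.profiling = TRUE)   # record memory use as well as time
y <- f2(runif(2e5))
Rprof(NULL)
summaryRprof("mem.out", memory = "both")    # adds a memory column (Mb)
```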

In this case a column indicating the memory used by each function, expressed in Mb, is added to the previous output.

Note that memory.profiling requires R to be compiled with --enable-memory-profiling. This option is enabled by default under Windows and macOS, but not on all Linux distributions.

## Benchmarking

Function Rprof() is a great tool for finding bottlenecks within functions so that they can be removed and our code sped up.

At the same time, we may want to compare different solutions in order to understand whether any of them provides a real advantage over the others.

### rbenchmark

Library rbenchmark is intended to facilitate benchmarking of arbitrary R code.

The library consists of just one function, benchmark(), which is a simple wrapper around system.time().

Given a specification of the benchmarking process: counts of replications, evaluation environment and an arbitrary number of expressions, benchmark evaluates each of the expressions in the specified environment, replicating the evaluation as many times as specified, and returning the results conveniently wrapped into a data frame.

As a first case we can compare the difference in performance between sapply() and vapply() when applied to a simple mathematical function:
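A hedged sketch (the function sqrt() and the replication count are choices, not from the original text):

```r
library(rbenchmark)

x <- 1:1000
benchmark(
  sapply = sapply(x, function(i) sqrt(i)),
  vapply = vapply(x, function(i) sqrt(i), numeric(1)),
  replications = 200
)
```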

and observe the gain in performance of vapply() compared with sapply().

As a second example, we can consider the calculation of the 95% quantile of the distribution of the correlation coefficient between two N(0,1) vectors of given size n = 10.

As a first case we define a function f_loop() by using a simple for loop over k iterations.
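f_loop() could be sketched as (the defaults k = 1000 and n = 10 are assumptions, n = 10 following the text):

```r
## 95% quantile of the correlation between two N(0,1) samples of size n,
## estimated from k simulated correlations collected with a for loop.
f_loop <- function(k = 1000, n = 10) {
  r <- numeric(k)
  for (i in 1:k) r[i] <- cor(rnorm(n), rnorm(n))
  quantile(r, 0.95)
}
```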

As a second case we develop the same function using a functional approach based on replicate()
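The replicate()-based version, under the same assumptions:

```r
f_replicate <- function(k = 1000, n = 10) {
  r <- replicate(k, cor(rnorm(n), rnorm(n)))
  quantile(r, 0.95)
}
```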

As a third case we introduce a function matrix() within replicate():
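One reading of "matrix() within replicate()" is to draw both samples at once as a two-column matrix:

```r
f_matrix <- function(k = 1000, n = 10) {
  r <- replicate(k, {
    m <- matrix(rnorm(2 * n), ncol = 2)
    cor(m[, 1], m[, 2])
  })
  quantile(r, 0.95)
}
```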

Once the three functions have been defined, we can compare them using function benchmark() from library rbenchmark:
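The comparison might read as follows (the three sketches from above are repeated in compact form so the snippet runs on its own):

```r
library(rbenchmark)

f_loop <- function(k = 1000, n = 10) {
  r <- numeric(k)
  for (i in 1:k) r[i] <- cor(rnorm(n), rnorm(n))
  quantile(r, 0.95)
}
f_replicate <- function(k = 1000, n = 10)
  quantile(replicate(k, cor(rnorm(n), rnorm(n))), 0.95)
f_matrix <- function(k = 1000, n = 10)
  quantile(replicate(k, {
    m <- matrix(rnorm(2 * n), ncol = 2)
    cor(m[, 1], m[, 2])
  }), 0.95)

benchmark(f_loop(), f_replicate(), f_matrix(), replications = 10)
```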

We can see that all functions, despite the different programming styles, have very similar performances.

### microbenchmark

In some cases we may want to benchmark two or more solutions that require a very small amount of time to perform but are very often used in our code.

As an example, we may want to compare [i] and [[i]] when applied to atomic objects, with i being a single integer.

First, let’s notice that x[i] and x[[i]] return the same result when i is a single integer:
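For instance:

```r
x <- c(10, 20, 30)
x[2]                        # 20
x[[2]]                      # 20
identical(x[2], x[[2]])     # TRUE: x has no names or other attributes to drop
```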

In principle, in order to evaluate performances of these functions, we could use:

but a better choice is represented by library microbenchmark:
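A sketch (the choice of times = 1000 is an assumption):

```r
library(microbenchmark)

x <- c(10, 20, 30)
mb <- microbenchmark(x[2], x[[2]], times = 1000)
mb   # min, quartiles, median and max for each expression
```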

Note that timing is reported in nanoseconds and that the output is made up of several summary statistics.

Moreover, we can use the graphics facilities provided by library ggplot2 to compare the two distributions:
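With ggplot2 loaded, autoplot() has a method for microbenchmark objects (the benchmark is repeated here so the snippet runs on its own):

```r
library(microbenchmark)
library(ggplot2)

x <- c(10, 20, 30)
mb <- microbenchmark(x[2], x[[2]], times = 1000)
autoplot(mb)   # plot of the two timing distributions
```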