In most programs, interrupt service routines (ISRs) are expected to run only a very short piece of code and exit quickly, as they prevent other parts of the code from running.
So, for example, in an interrupt from a communication peripheral such as I2C or UART, the reason for the interrupt has to be established first; then:
- if it is an error, it has to be cleared or otherwise dealt with, perhaps by setting a flag or an error/status variable for the main program to handle later;
- if the transmit buffer is empty, the software FIFO/buffer has to be checked: if it is nonempty, the next byte is picked from it and transmitted, otherwise the transmitter (or its interrupt) has to be disabled;
- if the receive buffer is nonempty, the received byte should be read out and stored into the software FIFO/buffer, for the "main" program to deal with later.
All this should not take more than a couple dozen instructions/machine cycles. In an STM32 running at tens of MHz, this means execution of a typical ISR should take on the order of ~10 μs or less.
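As an illustration, the pattern described above can be sketched as follows. The register and flag names (UART_SR, FLAG_RXNE, and so on) are placeholders, not any particular STM32 peripheral's actual names; here they are plain variables so the logic can be compiled and exercised anywhere.

```c
#include <stdint.h>
#include <stdbool.h>

#define FIFO_SIZE 64u  /* power of two, so the free-running indices wrap correctly */

/* Minimal single-producer/single-consumer ring buffer:
   the ISR side touches only one index, the main side only the other. */
typedef struct {
    volatile uint8_t  buf[FIFO_SIZE];
    volatile uint32_t head, tail;
} fifo_t;

static bool fifo_put(fifo_t *f, uint8_t b) {
    if (f->head - f->tail >= FIFO_SIZE) return false;   /* full */
    f->buf[f->head % FIFO_SIZE] = b;
    f->head++;
    return true;
}

static bool fifo_get(fifo_t *f, uint8_t *b) {
    if (f->head == f->tail) return false;               /* empty */
    *b = f->buf[f->tail % FIFO_SIZE];
    f->tail++;
    return true;
}

/* --- stand-ins for hardware registers (hypothetical names) --- */
volatile uint32_t UART_SR;        /* status register */
volatile uint8_t  UART_DR;        /* data register */
#define FLAG_ERR   (1u << 0)      /* some error condition */
#define FLAG_RXNE  (1u << 1)      /* receive buffer not empty */
#define FLAG_TXE   (1u << 2)      /* transmit buffer empty */

fifo_t rx_fifo, tx_fifo;
volatile uint32_t uart_errors;    /* status variable for "main" to inspect later */
bool tx_irq_enabled = true;       /* stands in for the TXE interrupt-enable bit */

/* The whole ISR is a handful of flag tests and FIFO moves -- no printf(). */
void uart_isr(void) {
    if (UART_SR & FLAG_ERR) {           /* error: record it, clear the flag */
        uart_errors++;
        UART_SR &= ~FLAG_ERR;
    }
    if (UART_SR & FLAG_RXNE) {          /* received byte -> software FIFO */
        fifo_put(&rx_fifo, UART_DR);
        UART_SR &= ~FLAG_RXNE;
    }
    if (UART_SR & FLAG_TXE) {           /* room in the transmitter? */
        uint8_t b;
        if (fifo_get(&tx_fifo, &b)) {
            UART_DR = b;                /* next byte from the software FIFO */
        } else {
            tx_irq_enabled = false;     /* nothing left: disable the TXE irq */
        }
        UART_SR &= ~FLAG_TXE;
    }
}
```

On real hardware, the flag-clearing details differ per peripheral (some flags are cleared by reading the data register), so consult the reference manual for the part at hand.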
Some novices, especially those trained on PCs rather than microcontrollers, tend to insert debugging printouts (usually using printf() or its variants, often encouraged by the "semihosting" feature offered by IDEs) into various parts of the code. This is usually harmless in the "main" part of the code; in an ISR, however, it is often problematic.
The vast majority of printf() implementations are UART-based and blocking, i.e. they wait until the "printed" characters have been transmitted. Transmitting one character at 115200 Baud takes around 86 μs, so transmitting a typical message may easily make the ISR run two orders of magnitude longer than normal. As a result, the ISR may fail to keep up with external or internal stimuli; in extreme cases it may take up all the processing time, preventing "main" from running at all.
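The arithmetic behind those numbers is simple: with the usual 8N1 framing, each character occupies 10 bit times (start bit, 8 data bits, stop bit). A small helper makes it explicit; the function name and the 40-character "typical message" are illustrative choices, not anything standard.

```c
/* Time in microseconds to transmit n characters over UART with 8N1 framing:
   1 start bit + 8 data bits + 1 stop bit = 10 bit times per character. */
double uart_tx_time_us(int n_chars, double baud) {
    const double bits_per_char = 10.0;
    return n_chars * bits_per_char / baud * 1e6;
}

/* uart_tx_time_us(1, 115200)  -> ~86.8 us   (one character)
   uart_tx_time_us(40, 115200) -> ~3472 us, i.e. ~3.5 ms for a short message --
   compare that with the ~10 us budget of a well-behaved ISR. */
```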
Yes, there are non-blocking implementations of printf(), and in some applications lengthy ISRs are not a problem - but these cases are rare.
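One common non-blocking compromise - sketched below with a made-up log_put()/log_drain() API, not any particular library - is to have the ISR only deposit characters into a RAM ring buffer and let the main loop drain it to the UART at its leisure. The ISR cost then drops to a few memory writes; here stdout stands in for the UART so the sketch runs on a PC.

```c
#include <stdint.h>
#include <stdio.h>

#define LOG_SIZE 256u  /* power of two, so the free-running indices wrap correctly */

static volatile char     log_buf[LOG_SIZE];
static volatile uint32_t log_head, log_tail;

/* Callable from an ISR: copies the message into RAM and returns immediately.
   Characters that don't fit are silently dropped - better than stalling the ISR. */
void log_put(const char *s) {
    while (*s) {
        if (log_head - log_tail >= LOG_SIZE) return;  /* full: drop the rest */
        log_buf[log_head % LOG_SIZE] = *s++;
        log_head++;
    }
}

/* Called from the main loop: pushes one pending character per call to the real
   (possibly slow) output. Returns 1 if a character was emitted, 0 if idle. */
int log_drain(void) {
    if (log_head == log_tail) return 0;
    putchar(log_buf[log_tail % LOG_SIZE]);
    log_tail++;
    return 1;
}
```

Dropping characters on overflow is a deliberate design choice: a debugging aid must never be allowed to distort the timing of the code it is observing.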
It's better to stick to the safe method - short ISRs, with no printf() in them.