Those who have used Arduino before will know that the function micros() returns the system time in microseconds. AVR-based Arduinos use TIMER0 in the MCU (such as the ATmega328) to track system time. On ARM Cortex-M0/M3/M4 processors, the most common way is to use SysTick to count system time. Typically people only need the system time at millisecond resolution, so all they do is set
volatile u32 Millis;
if (SysTick_Config (SystemCoreClock / 1000)) //1ms per interrupt
while (1);
and accumulate Millis every millisecond in the SysTick handler:
void SysTick_Handler(void)
{ Millis++; }
If you need the system time in microseconds, one way to do it is to let SysTick generate an interrupt every 1 µs and accumulate the count for the system time in microseconds. The code will look like this:
volatile u32 Millis;
volatile u32 Micros;
if (SysTick_Config (SystemCoreClock / 1000000)) //1us per interrupt
while (1);
and accumulate Micros every microsecond, deriving Millis from it, in the SysTick handler:
void SysTick_Handler(void)
{ Micros++; if (Micros % 1000 == 0) Millis++; } // a millisecond passes every 1000 microseconds
However, this is not a good way to achieve your goal, since Micros++ and the Millis update have to run every 72 cycles if your STM32 runs at 72 MHz. That takes too many system resources: on a Cortex-M3, interrupt entry and exit alone cost roughly a dozen cycles each, so the handler eats a large fraction of the CPU even though the computation itself takes almost no time.
In the end, I found a much smarter way to count system time in microseconds without sacrificing much system performance.
It's really easy to achieve. Before I show my code, let's do a quick refresher on how SysTick works.
SysTick is a 24-bit count-down timer in the ARM core. You load a reload value at the beginning, the counter decreases by 1 every system clock cycle, and when the count reaches 0 it automatically reloads and generates a SysTick interrupt. The counting down doesn't consume any system resources (though it does consume power :D). In my new solution, I use SysTick to generate an interrupt every 1 ms and accumulate the count for Millis only. When I need the system time in microseconds, I read the SysTick register SysTick->VAL to get the current counter value and convert it to the microsecond time I want. The advantage is that it doesn't take any system resources when we don't need the time in micros. Here is the code:
void Systick_Init(void)
{
if (SysTick_Config (SystemCoreClock / 1000)) //1ms per interrupt
while (1);
//set systick interrupt priority
NVIC_PriorityGroupConfig(NVIC_PriorityGroup_4); //4 bits for preemption priority, 0 bits for sub priority
NVIC_SetPriority(SysTick_IRQn, 0); //I want to make sure systick has the highest priority among all other interrupts
Millis = 0; //reset Millis
}
u32 micros(void)
{
Micros = Millis*1000 + 1000 - SysTick->VAL/72;
// = Millis*1000 + (SystemCoreClock/1000 - SysTick->VAL)/72;
return Micros;
}
u32 millis(void)
{
return Millis;
}
void delay_ms(u32 nTime)
{
u32 curTime = Millis;
while((nTime-(Millis-curTime)) > 0);
}
void delay_us(u32 nTime)
{
u32 curTime = Micros;
while((nTime-(Micros-curTime)) > 0);
}
and don't forget to accumulate Millis in SysTick_Handler:
void SysTick_Handler(void)
{ Millis++; }
In the function micros() we have this code:
Micros = Millis*1000 + 1000 - SysTick->VAL/72;
return Micros;
The equation was originally Millis*1000 + (SystemCoreClock/1000 - SysTick->VAL)/72;
however, I found that it takes a lot of cycles, and it only takes 25 cycles after I simplified it to Micros = Millis*1000 + 1000 - SysTick->VAL/72, which I verified by tracking the cycle count in the Keil simulator.
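For reference, the simplification is just algebra. With a 72 MHz clock the reload value is SystemCoreClock/1000 = 72000 ticks per millisecond, so:
Micros = Millis*1000 + (72000 - SysTick->VAL)/72
       = Millis*1000 + 72000/72 - SysTick->VAL/72
       = Millis*1000 + 1000 - SysTick->VAL/72
(Integer truncation can make the two forms differ by at most 1 µs, which is within the accuracy we already accept.) The saving comes mainly from the fact that SystemCoreClock is a variable, so SystemCoreClock/1000 costs an extra runtime division, while the constant 1000 costs nothing.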
25 cycles is only about a third of a microsecond, which is accurate enough for reading the system time in microseconds. If you don't call micros() frequently, it saves a lot of execution time for your other functions. I ended up making my flood fill 100 µs faster, from 610 µs to 510 µs, at one particular cell when simulating the All Japan 2011 maze with the same amount of computation; that was a pretty big surprise for me. It also saved 4 µs in my printf function. A lot of progress, wasn't it?
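For example, this is a minimal sketch of how a code section can be timed with micros() (timeCritical() is just a hypothetical placeholder for whatever you want to measure):
u32 t0 = micros(); // timestamp before the code under test
timeCritical(); // the code being measured
u32 elapsed = micros() - t0; // elapsed time in microseconds
The unsigned subtraction stays correct even if the 32-bit microsecond count wraps between the two reads, as long as the measured section is shorter than about 71 minutes.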
Thank you for your information, and especially for the second part about making microseconds with the NVIC and SysTick.
I have a question:
If I want to create 1 µs, how can I do it with your second code?
I have a problem with it: my program gets trapped in a loop and stays there.
Can you guide me on how to do this?
I need to make 1 µs delays for driving a DS18B20 with an STM32F103.
Thanks
What do you mean by "create a 1 µs"? Did you mean make a 1 µs delay?
Use void delay_us(u32 nTime) to make a 1 µs delay.
micros() only reads the current system time with µs-level accuracy.
Where do you run the function that communicates with the DS18B20? If you run your sensor-reading function in an interrupt that has higher priority than SysTick, delay_us will get jammed.
If that didn't solve your problem, you need to provide more detailed information.
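A minimal sketch of what I mean by the priority issue, assuming (purely as an example) that the DS18B20 routine runs from the EXTI0 interrupt: give that interrupt a numerically larger, i.e. lower, preemption priority than SysTick, which Systick_Init() set to 0. Otherwise the handler that increments Millis can never run while your sensor code is spinning inside a delay.
NVIC_SetPriority(EXTI0_IRQn, 1); // hypothetical sensor IRQ; any preemption priority > 0 sits below SysTick here
Or simply call the DS18B20 routine from the main loop instead of from an interrupt.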
I have a problem with the microsecond function:
If we enter this function we cannot exit from it. To be exact, I have a problem with this line:
while((nTime-(Micros-curTime)) > 0);
This while does not work properly.
I use your second method for delay generation.
Hi
I tried the code from your first part. It works normally and correctly at frequencies above 36 MHz. I tested it with a range of frequencies and realized that when we want to create a microsecond delay based on your code, SysTick_Config() doesn't return a suitable result at startup and gets trapped in while(1);.
I need to create microsecond delays with an 8 MHz clock frequency.
Make sure the value of SystemCoreClock is the desired system frequency. If you set it to 72 MHz, the value should be 72000000.
If SysTick_Config() gets trapped in while(1), it means the system clock wasn't properly set in the first place. If the external crystal/oscillator isn't working properly, the chip will fall back to the internal RC (8 MHz) running at 1x PLL speed, so the actual system clock is 8x1 = 8 MHz. You can use the internal RC to make the MCU go up to 64 MHz: you need to divide the internal RC by two and then multiply by a PLL factor of 16 to reach 64 MHz, whereas an 8 MHz external clock source is multiplied by a PLL factor of 9 to reach 72 MHz.
If SysTick_Config() is working and the delay is still not working, try changing the variable type of "nTime" from unsigned to signed.
For the line "Micros = Millis*1000 + 1000 - SysTick->VAL/72", the value 72 means the system clock is 72 MHz, i.e. 72 ticks per microsecond. If you set your MCU clock to some other value, change the 72 to the corresponding number of ticks per microsecond.
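A sketch of a generalized micros(), assuming SystemCoreClock holds the real clock frequency and that it is a whole number of MHz, which derives the ticks-per-microsecond divisor instead of hard-coding 72:
u32 micros(void)
{
u32 ticksPerUs = SystemCoreClock / 1000000; // e.g. 72 at 72 MHz, 8 at 8 MHz
Micros = Millis*1000 + 1000 - SysTick->VAL / ticksPerUs;
return Micros;
}
The runtime division by a variable costs a few extra cycles compared with the constant 72, so for the fastest version you would still hard-code your own clock value.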
Thank you very much for replying.
Is this sentence true:
"SysTick_Config() doesn't accept values like 8 or 10"
If my SystemCoreClock is 8 MHz, we have SysTick_Config(8) to create the 1 µs delay function, and this function doesn't work properly.
I analyzed the SysTick_Config() function:
static __INLINE uint32_t SysTick_Config(uint32_t ticks)
{
if (ticks > SysTick_LOAD_RELOAD_Msk) return (1); /* Reload value impossible */
SysTick->LOAD = (ticks & SysTick_LOAD_RELOAD_Msk) - 1; /* set reload register */
NVIC_SetPriority (SysTick_IRQn, (1<<__NVIC_PRIO_BITS) - 1); /* set Priority for Cortex-M0 System Interrupts */
SysTick->VAL = 0; /* Load the SysTick Counter Value */
SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
SysTick_CTRL_TICKINT_Msk |
SysTick_CTRL_ENABLE_Msk; /* Enable SysTick IRQ and SysTick Timer */
return (0); /* Function successful */
}
We pass ticks as the input of the function, and SysTick_Handler() then works from that period.
In the first line of the function we have:
if (ticks > SysTick_LOAD_RELOAD_Msk) return (1); /* Reload value impossible */
It shows that if the ticks value is larger than a fixed value called SysTick_LOAD_RELOAD_Msk, the function returns 1 and we go into the infinite loop.
I know that SysTick_LOAD_RELOAD_Msk is 0x00FFFFFF.
I use an STM32F103RBT6, which has a Cortex-M3 core.
There is another question:
What is the meaning of this line:
NVIC_SetPriority (SysTick_IRQn, (1<<__NVIC_PRIO_BITS) - 1); /* set Priority for Cortex-M0 System Interrupts */
Is it important to have a line like this in the SysTick_Config function?
Is there any need to change this value for a Cortex-M3 core?
I use delay_us() for the DS18B20, which needs precise delays. When my SystemCoreClock is above 48 MHz the function works properly, but when I change SystemCoreClock to my desired value it doesn't work.
Thanks a lot
According to your code, if the MCU runs at 8 MHz, SysTick_Config(8) means the SysTick timer will interrupt every 8 system cycles. However, it takes more than 8 cycles to enter, execute and exit the SysTick handler. This could be the cause of the jam.
I think you should use the second method to make a 1 µs delay, since its SysTick handler only interrupts every 1 ms instead of every 1 µs.
You should also make sure a few things are working properly; for example, print the actual value of SystemCoreClock over USART to make sure the MCU runs at the desired speed.
The reason I put NVIC_SetPriority(SysTick_IRQn, 0); here is that I want the SysTick timer to run at the highest priority so it won't be interrupted by any other timer (if any are being used); the default priority for SysTick is the lowest among all of them.
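A sketch of that check, assuming printf is already retargeted to a USART (the function name is just an example), using the CMSIS helper SystemCoreClockUpdate() so the variable reflects the actual RCC settings:
#include <stdio.h> /* assumes printf/fputc is retargeted to a USART */
void report_clock(void)
{
SystemCoreClockUpdate(); /* recompute SystemCoreClock from the RCC registers */
printf("SystemCoreClock = %lu Hz\r\n", (unsigned long)SystemCoreClock);
}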
You should call __disable_irq() before reading the Millis and Micros variables that are written inside the SysTick IRQ, and then __enable_irq(). Otherwise you run the chance of getting wrong values, since the SysTick IRQ writes these variables. In most cases it may work just fine if the compiler generates code that is atomic, but that is not a guarantee.
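A minimal sketch of that critical-section read, wrapped in a helper whose name is just an example:
u32 millis_atomic(void)
{
u32 ms;
__disable_irq(); // keep the SysTick IRQ from updating Millis mid-read
ms = Millis;
__enable_irq();
return ms;
}
On a Cortex-M a single aligned 32-bit read is already atomic, so this matters most when several related values, such as Millis plus SysTick->VAL inside micros(), have to be read consistently.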
Did you test it? Where is the function "u32 millis(void)" called? In this code it is never called. The variable Millis is always empty.
u32 millis(void) is used whenever you need the system time in milliseconds. It will be used in applications.
Thanks for your answer. I posted my modified code in the forum http://www.mikrocontroller.net/topic/416865 and we found a solution.
Greetings
Daniel
Great job!
Thank you!
Hi,
thanks, it works.
I have replaced
while((nTime-(Micros-curTime)) > 0);
by
while( micros()-curTime < nTime )
which I find easier to read (I'd say in natural language: while elapsed time < wanted delay).
Any reason you did it that way ?
Thanks again.
I did it intentionally to make it more logical, especially for others to read. I think the compiler might rephrase it anyway when compiling.
Hi,
First of all, thanks for the great idea.
I used your code to measure the execution time of fast functions, by taking the ticks from epoch (micros weren't fast enough…) at the beginning and end of a function and subtracting the first from the latter.
In the process I found a bug in the code:
at times I got a negative tick count for a function.
After some research I found that the problem is that it takes time for the interrupt to increase the value of Millis after SysTick->VAL has wrapped around.
According to my measurements, Millis wasn't updated for 1-6 microseconds (between 257 and 1206 ticks; I'm working at 216 MHz…)
I solved my problem by just adding 1 ms worth of ticks to the value if it's negative.
I couldn't find a way to detect a wrong tick value inside the ticksFromEpoch() function itself, though.
Here is my code, hope it'll help someone:
uint64_t STM32TimeProvider::ticksFromEpoch()
{
uint64_t millis = (HAL_GetTick() + 1);
return millis * SystemCoreClock / 1000 - SysTick->VAL;
}
STM32StopWatch::STM32StopWatch(uint64_t& total, uint64_t& times)
: m_total(total), m_times(times)
{
m_start = m_clock.ticksFromEpoch();
++m_times;
}
STM32StopWatch::~STM32StopWatch()
{
int64_t diff = (m_clock.ticksFromEpoch() - m_start);
m_total += diff + ((diff < 0) * SystemCoreClock / 1000);
}
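A related pattern that avoids the inconsistent read in the first place, sketched here for the original u32/72 MHz setup and assuming it is called from thread context with interrupts enabled (micros_consistent is just an example name), is to re-read Millis around the SysTick->VAL sample and retry if a tick interrupt slipped in between:
u32 micros_consistent(void)
{
u32 ms, val;
do {
ms = Millis; // snapshot the millisecond counter
val = SysTick->VAL; // then the hardware sub-millisecond counter
} while (ms != Millis); // retry if SysTick fired between the two reads
return ms*1000 + 1000 - val/72;
}
If the caller runs at a priority above SysTick the retry never fires, so a correction like yours is still needed in that case.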
Thanks again.
Thanks for the post. I believe delay_us should instead use the micros() function, i.e.:
void delay_us(uint32_t nTime)
{
uint32_t curTime = micros();
while((nTime-(micros()-curTime)) > 0);
}
In your version the Micros variable is not updated.
Haha, thanks for pointing that out. This post is old and hasn't been updated with my current code.
My current code uses micros() instead. Same stuff though.
Thanks, what a brilliant way to implement the micros and millis functions.
I had spent a few nights figuring out how to implement them on the M0 series without consuming too many MCU resources.
And you did it !!
for me :>
Hi Green,
Can you please tell how you handle the overflow situation, e.g. when SysTick->VAL overflows?
It won’t overflow.
Whenever SysTick->VAL counts down to 0, it triggers the interrupt and reloads with your designated value to start the next counting cycle.