How to Reduce Delays in NVM Operations in AUTOSAR Projects
- AutoEconnect Sessions
How NVM Prioritisation Works:
In AUTOSAR NVM (Non-Volatile Memory) configuration, NVM block job priorities (NvMBlockJobPriority) range from 0 (highest, i.e. immediate priority) to 255 (lowest).
Typically, NVM operations follow these rules:
1. Higher-priority blocks are served first, but a job that is already running is not preempted.
2. Once an NVM job starts, it must finish before another job can start.
3. Background queue processing: if higher-priority blocks are being written frequently, lower-priority blocks may have to wait for free cycles.
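As a rough illustration of this priority range, the sketch below shows how per-block job priorities might be laid out. The structure, block IDs and values are invented for this example and do not come from any real generated NvM configuration (where the parameter is NvMBlockJobPriority):

#include <stdint.h>

/* Simplified, illustrative view of per-block job priorities.
 * 0 = immediate (highest) priority ... 255 = lowest priority. */
typedef struct {
    uint16_t BlockId;       /* NVRAM block identifier (example values) */
    uint8_t  JobPriority;   /* corresponds to NvMBlockJobPriority      */
} BlockPrioExample;

static const BlockPrioExample ExampleCfg[] = {
    {  1u,   0u },   /* e.g. crash data: written with immediate priority */
    { 10u,  15u },   /* frequently written application block             */
    { 20u, 127u },   /* less urgent block                                */
};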
Let's consider the following use-case scenario, raised by one of our website visitors:
Query from AutoEConnect Visitor:
I have a query regarding NVM prioritisation. Suppose Block A has priority 15 and is written to NVM every 10 ms, while the fault memory block has priority 127.
When I clear the fault memory through $14 FF FF FF, it takes more than 7 seconds before the positive response is given.
My understanding is that, as Block A has high priority, it is served first; when the clear-fault request comes in, the ECU temporarily gets access to the fault block, then has to serve the 10 ms task updating Block A, and only afterwards comes back to the fault memory.
Is my understanding correct?
Response from AutoEConnect Team:
Your understanding is partially correct, but there are additional factors at play that can contribute to the 7-second delay when clearing fault memory.
How NVM Prioritization Works:
In AUTOSAR NVM (Non-Volatile Memory), priority levels range from 0 (highest, i.e. immediate) to 255 (lowest), meaning:
• Block A (Priority = 15) → Higher priority than the Fault Memory Block.
• Fault Memory Block (Priority = 127) → Lower priority.
Typically, NVM operations follow these rules:
1. Higher-priority blocks are served first, but a job that is already running is not preempted.
2. Once an NVM job starts (like writing Block A every 10 ms), it must finish before another job can start.
3. Background queue processing: if higher-priority blocks are being written frequently, lower-priority blocks may have to wait for free cycles.
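To make this concrete, here is a minimal sketch of how the two requests from the query typically end up in the same NvM job queue. The block IDs, the RAM mirror and the function names are assumptions for illustration; in a real integration the ClearDTC path runs through Dcm/Dem rather than an application function:

#include "Std_Types.h"
#include "NvM.h"

#define NVM_BLOCK_A_ID         ((NvM_BlockIdType)10u)   /* assumed ID, priority 15  */
#define NVM_BLOCK_FAULT_MEM_ID ((NvM_BlockIdType)20u)   /* assumed ID, priority 127 */

static uint8 BlockA_RamMirror[32];

/* 10 ms runnable: queues a fresh write job for Block A every cycle
 * (return values ignored here for brevity) */
void App_10msTask(void)
{
    (void)NvM_WriteBlock(NVM_BLOCK_A_ID, BlockA_RamMirror);
}

/* Triggered (via Dem) by UDS 0x14 FF FF FF: queues one job for the fault
 * memory block; it has to compete with the pending Block A writes.
 * NULL_PTR means the block's configured permanent RAM mirror is used. */
void App_OnClearDtc(void)
{
    (void)NvM_WriteBlock(NVM_BLOCK_FAULT_MEM_ID, NULL_PTR);
}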
Why is Clear Fault Memory Taking ~7s?
When you send the $14 FF FF FF (Clear DTC) request:
1. The ECU temporarily accesses the fault memory block to erase the fault data.
2. However, Block A is updating every 10ms, meaning it keeps adding new write jobs to the NVM queue.
3. NVM handles requests sequentially (not preemptive):
• NVM starts writing Block A.
• In between, it processes the Clear Fault Memory request when there is a free slot.
• But every 10 ms Block A gets another write request which, having the higher priority, is served first, pushing Clear Fault Memory further down the queue.
4. This cycle causes significant delays (since Block A keeps getting scheduled, starving the fault memory clear request).
5. If Block A has fast write cycles, it can significantly slow down lower-priority requests like clearing faults.
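The effect can be reproduced with a tiny stand-alone model. This is not AUTOSAR code, and the timings (8 ms of flash activity per Block A write inside each 10 ms period, 40 ms of total clearing work split into small sub-jobs) are purely assumed numbers chosen to show the starvation pattern:

#include <stdio.h>

#define PERIOD_MS         10   /* a new Block A write request every 10 ms  */
#define BLOCK_A_WRITE_MS   8   /* assumed flash time per Block A write     */
#define CLEAR_TOTAL_MS    40   /* assumed total work to clear fault memory */

int main(void)
{
    int elapsed_ms = 0;
    int clear_remaining_ms = CLEAR_TOTAL_MS;   /* ClearDTC requested at t=0 */

    while (clear_remaining_ms > 0) {
        elapsed_ms += BLOCK_A_WRITE_MS;        /* Block A is served first    */
        int idle = PERIOD_MS - BLOCK_A_WRITE_MS;
        int done = (idle < clear_remaining_ms) ? idle : clear_remaining_ms;
        clear_remaining_ms -= done;            /* clearing only gets the gap */
        elapsed_ms += done;
    }

    /* With these numbers, 40 ms of clearing stretches to about 200 ms; with
     * many event memory entries or slower flash it grows into seconds.      */
    printf("Clear finished after %d ms instead of %d ms\n",
           elapsed_ms, CLEAR_TOTAL_MS);
    return 0;
}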
How to Reduce the 7-Second Delay?
To improve response time, consider these optimizations:
✅ Reduce the Frequency of Block A Writes (Increase 10ms to a higher interval like 100ms or more)
• Reducing the NVM load will give more execution time to the Clear Fault operation.
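For example, a throttled variant of the 10 ms runnable might only queue a write when the data has actually changed, and at most every 100 ms (block ID, buffer sizes and function names are assumptions):

#include <string.h>
#include "Std_Types.h"
#include "NvM.h"

#define NVM_BLOCK_A_ID ((NvM_BlockIdType)10u)   /* assumed block ID */

static uint8  BlockA_RamMirror[32];
static uint8  BlockA_LastWritten[32];
static uint16 Cycle10msCount = 0u;

void App_10msTask(void)
{
    if (Cycle10msCount < 10u) {
        Cycle10msCount++;
    }

    /* Write at most every 100 ms, and only if the content really changed */
    if ((Cycle10msCount >= 10u) &&
        (memcmp(BlockA_RamMirror, BlockA_LastWritten, sizeof(BlockA_RamMirror)) != 0))
    {
        if (NvM_WriteBlock(NVM_BLOCK_A_ID, BlockA_RamMirror) == E_OK) {
            memcpy(BlockA_LastWritten, BlockA_RamMirror, sizeof(BlockA_LastWritten));
            Cycle10msCount = 0u;
        }
    }
}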
✅ Use Asynchronous Writes Instead of Synchronous Writes
• If the application waits for each Block A write to complete before continuing (synchronous usage), it can block other operations. Requesting the write and checking the result later (asynchronous usage) might help, as in the sketch below.
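A sketch of fully asynchronous usage: request the write, return immediately, and pick up the result in a later cycle via NvM_GetErrorStatus() instead of waiting for completion (names other than the NvM APIs are assumptions):

#include "Std_Types.h"
#include "NvM.h"

#define NVM_BLOCK_A_ID ((NvM_BlockIdType)10u)   /* assumed block ID */

static uint8   BlockA_RamMirror[32];
static boolean BlockA_WritePending = FALSE;

void App_Task(void)
{
    if (BlockA_WritePending == FALSE) {
        /* Queue the job and continue; do not busy-wait for the NvM result */
        if (NvM_WriteBlock(NVM_BLOCK_A_ID, BlockA_RamMirror) == E_OK) {
            BlockA_WritePending = TRUE;
        }
    } else {
        NvM_RequestResultType result = NVM_REQ_PENDING;
        /* Check the job result in a later cycle */
        if ((NvM_GetErrorStatus(NVM_BLOCK_A_ID, &result) == E_OK) &&
            (result != NVM_REQ_PENDING)) {
            BlockA_WritePending = FALSE;   /* NVM_REQ_OK, NVM_REQ_NOT_OK, ... */
        }
    }
}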
✅ Check for Queue Overflow Issues
• If too many Block A writes are queued, Clear Fault Memory might be waiting for NVM buffer availability.
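One way to make a full queue visible is to check the return value of NvM_WriteBlock(): it returns E_NOT_OK when the request cannot be queued, so the application can keep a dirty flag and retry later instead of silently losing the write (names apart from the NvM API are assumptions):

#include "Std_Types.h"
#include "NvM.h"

#define NVM_BLOCK_A_ID ((NvM_BlockIdType)10u)   /* assumed block ID */

static uint8   BlockA_RamMirror[32];
static boolean BlockA_Dirty = FALSE;

void App_MarkBlockADirty(void)
{
    BlockA_Dirty = TRUE;
}

void App_Task(void)
{
    if (BlockA_Dirty == TRUE) {
        /* E_NOT_OK here typically means the job queue is full or the block
         * is still pending; keep the flag set and retry next cycle. */
        if (NvM_WriteBlock(NVM_BLOCK_A_ID, BlockA_RamMirror) == E_OK) {
            BlockA_Dirty = FALSE;
        }
    }
}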
✅ Use Immediate Priority for the Fault Memory Block, or Invalidate It Instead of Rewriting (NvM_InvalidateNvBlock())
• If your NVM configuration supports immediate-priority jobs (NvMBlockJobPriority = 0) or block invalidation, the fault memory can be cleared much faster than through a normal queued write; see the sketch below.
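If invalidation is acceptable for your Dem integration (i.e. an invalid NV block is treated as "no stored DTCs"), the clear request can be served by marking the block invalid instead of rewriting it, which is usually much faster. The block ID and the wrapper function are assumptions:

#include "Std_Types.h"
#include "NvM.h"

#define NVM_BLOCK_FAULT_MEM_ID ((NvM_BlockIdType)20u)   /* assumed block ID */

Std_ReturnType App_RequestFaultMemoryClear(void)
{
    /* Also asynchronous: poll NvM_GetErrorStatus() and send the UDS
     * positive response only once the result is NVM_REQ_OK. */
    return NvM_InvalidateNvBlock(NVM_BLOCK_FAULT_MEM_ID);
}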
✅ Increase the Fault Memory Block's Priority (if configurable)
• Remember that a lower NvMBlockJobPriority value means a higher priority: as long as Block A (15) outranks the fault memory block (127), the fault block keeps waiting. Giving the fault memory block a value below 15 (or 0 for immediate priority) lets the clear request jump ahead of the periodic writes.
Conclusion:
Yes, your understanding is partially correct: Block A is constantly updating, so the Clear Fault Memory operation has to wait for free NVM cycles.
However, the issue is not the fault memory block's priority alone, but the frequent 10 ms writes to the higher-priority Block A clogging the NVM queue.
Optimising Block A’s write strategy can significantly reduce the 7-second delay.
