When a process needs, say, 1 GB of memory, the page table ends up holding more than 200k entries (with standard 4 KiB pages) that the CPU has to look up, and of course this slows things down. Fortunately, modern CPUs support bigger pages, so-called Hugepages, which reduce the number of pages to look up and therefore improve performance.

The Memory Management Unit (MMU) also uses an additional hardware cache, the Translation Lookaside Buffer (TLB). When a virtual memory address is translated to a physical one, the translation is calculated in the MMU and the resulting mapping is stored in the TLB, so the next access to the same page is served first by the TLB (which is fast) and only falls back to the MMU on a miss. Because the TLB is a hardware cache, it has a limited number of entries, so a large number of pages will slow the application down.

So the combination of the TLB with Hugepages reduces both the number of page-table lookups and the time it takes to translate a virtual page address to a physical page address, which again increases performance.
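The "more than 200k pages" figure above follows directly from the page sizes. A minimal sketch of the arithmetic, assuming the common x86-64 sizes of 4 KiB for base pages and 2 MiB for huge pages:

```python
# Page-count arithmetic for mapping 1 GiB of memory,
# assuming x86-64 defaults: 4 KiB base pages, 2 MiB huge pages.
GiB = 1024 ** 3
base_page = 4 * 1024          # 4 KiB
huge_page = 2 * 1024 * 1024   # 2 MiB

pages_4k = GiB // base_page   # entries the page table needs with 4 KiB pages
pages_2m = GiB // huge_page   # entries needed with 2 MiB huge pages

print(pages_4k)  # 262144 -- i.e. "more than 200k" pages
print(pages_2m)  # 512
```

With huge pages the same 1 GiB needs only 512 page-table entries instead of 262,144, so far more of the working set fits in the TLB's limited number of entries.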