The Computer Architecture Behind Cache Access Latency
Introduction: this article shows how to measure the access latency of each cache level and explains the computer-architecture principles behind the measurements.

A CPU's caches form a multi-level pyramid: L1 sits closest to the CPU and has the lowest access latency, but also the smallest capacity.
(Figure: the cache hierarchy pyramid. Source: Lab 4: Caching - HackMD)
Cache Latency
Wikichip [1] lists the cache latencies of various CPU models, usually in cycles; a simple conversion turns them into nanoseconds. Taking Skylake as an example, the baseline per-level cache latencies are:

CPU Frequency: 2654 MHz (0.3768 ns/clock)
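The cycles-to-nanoseconds conversion is just cycles divided by clock frequency. A minimal sketch of the arithmetic (`cycles_to_ns` is a hypothetical helper; the 4-cycle figure in the comment is the commonly cited Skylake L1 load-to-use latency, used purely as an illustration):

```c
/* Convert a latency given in CPU cycles to nanoseconds:
 * ns = cycles * 1000 / frequency_in_MHz.
 * At 2654 MHz, one cycle is 1000/2654 ~= 0.3768 ns, matching the
 * figure above; a 4-cycle L1 hit would then cost ~1.51 ns. */
static double cycles_to_ns(double cycles, double freq_mhz)
{
    return cycles * 1000.0 / freq_mhz;
}
```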
Designing the experiment
1. naive thinking
Allocate a buffer whose size matches the cache level under test. Traverse it once to warm up, loading all the data into the cache. Then traverse it a second time, measure the elapsed time, and compute the average latency per read.

The implementation, mem-lat.c, is as follows:
```c
#include <sys/types.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/time.h>
#include <unistd.h>

#define ONE p = (char **)*p;
#define FIVE ONE ONE ONE ONE ONE
#define TEN FIVE FIVE
#define FIFTY TEN TEN TEN TEN TEN
#define HUNDRED FIFTY FIFTY

static void usage()
{
    printf("Usage: ./mem-lat -b xxx -n xxx -s xxx\n");
    printf("  -b buffer size in KB\n");
    printf("  -n number of read\n\n");
    printf("  -s stride skipped before the next access\n\n");
    printf("Please don't use non-decimal based number\n");
}

int main(int argc, char* argv[])
{
    unsigned long i, j, size, tmp;
    unsigned long memsize = 0x800000; /* 1/4 LLC size of skylake, 1/5 of broadwell */
    unsigned long count = 1048576;    /* memsize / 64 * 8 */
    unsigned int stride = 64;         /* skipped amount of memory before the next access */
    unsigned long sec, usec;
    struct timeval tv1, tv2;
    struct timezone tz;
    unsigned int *indices;

    while (argc-- > 0) {
        if ((*argv)[0] == '-') {  /* look at first char of next */
            switch ((*argv)[1]) { /* look at second */
            case 'b':
                argv++;
                argc--;
                memsize = atoi(*argv) * 1024;
                break;
            case 'n':
                argv++;
                argc--;
                count = atoi(*argv);
                break;
            case 's':
                argv++;
                argc--;
                stride = atoi(*argv);
                break;
            default:
                usage();
                exit(1);
                break;
            }
        }
        argv++;
    }

    char* mem = mmap(NULL, memsize, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);

    // trick 3: init pointer chasing, per stride=8 byte
    size = memsize / stride;
    indices = malloc(size * sizeof(int));
    for (i = 0; i < size; i++)
        indices[i] = i;

    // trick 2: fill mem with pointer references
    for (i = 0; i < size - 1; i++)
        *(char **)&mem[indices[i]*stride] = (char*)&mem[indices[i+1]*stride];
    *(char **)&mem[indices[size-1]*stride] = (char*)&mem[indices[0]*stride];

    char **p = (char **)mem;
    tmp = count / 100;

    gettimeofday(&tv1, &tz);
    for (i = 0; i < tmp; ++i) {
        HUNDRED; // trick 1
    }
    gettimeofday(&tv2, &tz);

    if (tv2.tv_usec < tv1.tv_usec) {
        usec = 1000000 + tv2.tv_usec - tv1.tv_usec;
        sec = tv2.tv_sec - tv1.tv_sec - 1;
    } else {
        usec = tv2.tv_usec - tv1.tv_usec;
        sec = tv2.tv_sec - tv1.tv_sec;
    }

    printf("Buffer size: %ld KB, stride %d, time %ld.%06ld s, latency %.2f ns\n",
           memsize/1024, stride, sec, usec,
           (sec * 1000000 + usec) * 1000.0 / (tmp * 100));

    munmap(mem, memsize);
    free(indices);
}
```

Three small tricks are used here:
- The HUNDRED macro: the macro expansion keeps instructions other than the memory loads out of the timed loop as much as possible.
- Double pointers: the buffer is chained together through char ** pointers, so no offset arithmetic is needed on each access.
- char * and char ** are both 8 bytes, so the stride is 8.
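The pointer-chasing setup behind tricks 1 and 2 can be isolated into a small sketch (`build_ring` and `chase` are hypothetical helper names, not part of mem-lat.c):

```c
#include <stddef.h>

/* Trick 2 in miniature: every `stride` bytes of the buffer stores the
 * address of the next element, and the last element points back to the
 * first, forming a ring. */
static char **build_ring(char *mem, size_t memsize, size_t stride)
{
    size_t n = memsize / stride;
    size_t i;
    for (i = 0; i + 1 < n; i++)
        *(char **)&mem[i * stride] = &mem[(i + 1) * stride];
    *(char **)&mem[(n - 1) * stride] = &mem[0];
    return (char **)mem;
}

/* Trick 1 in miniature: each step is a single dependent load,
 * p = (char **)*p, with no offset arithmetic in the loop body. */
static char **chase(char **p, size_t steps)
{
    while (steps--)
        p = (char **)*p;
    return p;
}
```

Chasing memsize/stride steps lands back on the first element, which is a cheap sanity check that the chain really is a ring.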
Test script:
```shell
#set -x
work=./mem-lat
buffer_size=1
stride=8

for i in `seq 1 15`; do
    taskset -ac 0 $work -b $buffer_size -s $stride
    buffer_size=$(($buffer_size*2))
done
```

The results are as follows:
```
//L1
Buffer size: 1 KB, stride 8, time 0.003921 s, latency 3.74 ns
Buffer size: 2 KB, stride 8, time 0.003928 s, latency 3.75 ns
Buffer size: 4 KB, stride 8, time 0.003935 s, latency 3.75 ns
Buffer size: 8 KB, stride 8, time 0.003926 s, latency 3.74 ns
Buffer size: 16 KB, stride 8, time 0.003942 s, latency 3.76 ns
Buffer size: 32 KB, stride 8, time 0.003963 s, latency 3.78 ns
//L2
Buffer size: 64 KB, stride 8, time 0.004043 s, latency 3.86 ns
Buffer size: 128 KB, stride 8, time 0.004054 s, latency 3.87 ns
Buffer size: 256 KB, stride 8, time 0.004051 s, latency 3.86 ns
Buffer size: 512 KB, stride 8, time 0.004049 s, latency 3.86 ns
Buffer size: 1024 KB, stride 8, time 0.004110 s, latency 3.92 ns
//L3
Buffer size: 2048 KB, stride 8, time 0.004126 s, latency 3.94 ns
Buffer size: 4096 KB, stride 8, time 0.004161 s, latency 3.97 ns
Buffer size: 8192 KB, stride 8, time 0.004313 s, latency 4.11 ns
Buffer size: 16384 KB, stride 8, time 0.004272 s, latency 4.07 ns
```

Compared with the baseline, the L1 latency reads too high while L2 and L3 read too low. Not what we expected.
2. thinking with hardware: cache line
On modern processors, memory is organized in the cache at cache-line granularity, and reads and writes are performed a whole cache line at a time. The most common cache-line size is 64 bytes.

If we simply walk a 128 KB buffer sequentially at 8-byte granularity, and the data actually resides in L2, then the first access to each line pulls it into L1 and the remaining accesses to that line hit in L1, making the measured L2 latency come out far too low.

The CPU tested here has 64-byte cache lines, so we just set the stride to 64.
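The effect is easy to see with a bit of index arithmetic (`line_of` and `lines_touched` are hypothetical helpers, assuming 64-byte lines): with stride 8, eight consecutive accesses fall into the same cache line, so only one in eight reads can actually miss L1.

```c
#include <stddef.h>

#define LINE_SIZE 64 /* assumed cache-line size in bytes */

/* Cache-line index that a byte offset falls into. */
static size_t line_of(size_t offset)
{
    return offset / LINE_SIZE;
}

/* Count how many distinct cache lines a strided walk over `bytes`
 * bytes touches. Only the first access to each line can miss. */
static size_t lines_touched(size_t bytes, size_t stride)
{
    size_t count = 0, i;
    size_t prev = (size_t)-1;
    for (i = 0; i < bytes; i += stride) {
        if (line_of(i) != prev) {
            count++;
            prev = line_of(i);
        }
    }
    return count;
}
```

With stride 64, every access begins a new line, so every access is exposed to the real latency of the level being measured.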
The results are as follows:
```
//L1
Buffer size: 1 KB, stride 64, time 0.003933 s, latency 3.75 ns
Buffer size: 2 KB, stride 64, time 0.003930 s, latency 3.75 ns
Buffer size: 4 KB, stride 64, time 0.003925 s, latency 3.74 ns
Buffer size: 8 KB, stride 64, time 0.003931 s, latency 3.75 ns
Buffer size: 16 KB, stride 64, time 0.003935 s, latency 3.75 ns
Buffer size: 32 KB, stride 64, time 0.004115 s, latency 3.92 ns
//L2
Buffer size: 64 KB, stride 64, time 0.007423 s, latency 7.08 ns
Buffer size: 128 KB, stride 64, time 0.007414 s, latency 7.07 ns
Buffer size: 256 KB, stride 64, time 0.007437 s, latency 7.09 ns
Buffer size: 512 KB, stride 64, time 0.007429 s, latency 7.09 ns
Buffer size: 1024 KB, stride 64, time 0.007650 s, latency 7.30 ns
Buffer size: 2048 KB, stride 64, time 0.007670 s, latency 7.32 ns
//L3
Buffer size: 4096 KB, stride 64, time 0.007695 s, latency 7.34 ns
Buffer size: 8192 KB, stride 64, time 0.007786 s, latency 7.43 ns
Buffer size: 16384 KB, stride 64, time 0.008172 s, latency 7.79 ns
```

Compared with scheme 1, the L2 and L3 latencies have grown, but they are still lower than expected.
3. thinking with hardware: prefetch
Modern processors usually support prefetching. Data prefetch loads data that the code is likely to use soon into the cache ahead of time, reducing the time the CPU stalls waiting on memory, raising the cache hit rate, and thus improving software performance.
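Hardware prefetch is transparent to software, but GCC and Clang expose the same idea through the __builtin_prefetch intrinsic, which issues a non-binding prefetch hint for an address some distance ahead of the current access. A minimal sketch of the concept (`sum_with_prefetch` is a hypothetical function; the 8-line prefetch distance is an arbitrary illustrative choice):

```c
#include <stddef.h>

/* Sum an array while hinting the prefetcher about data ~8 cache lines
 * ahead. The hint never changes the result; at worst it is ignored. */
static long sum_with_prefetch(const long *a, size_t n)
{
    long total = 0;
    size_t i;
    for (i = 0; i < n; i++) {
        if (i + 64 < n) /* 64 longs = 8 x 64-byte cache lines ahead */
            __builtin_prefetch(&a[i + 64], 0 /* read */, 1 /* low locality */);
        total += a[i];
    }
    return total;
}
```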
Intel processors support four hardware prefetchers [2], which can be turned on and off through an MSR (per Intel's documentation, MSR 0x1A4, one disable bit per prefetcher):

- L2 hardware prefetcher (streamer)
- L2 adjacent cache line prefetcher
- DCU (L1 data cache) next-line prefetcher
- DCU instruction-pointer (IP) prefetcher
Here we simply set the stride to 128 and 256 to sidestep hardware prefetching. The measured L3 latency grows noticeably:
```
// stride 128
Buffer size: 1 KB, stride 256, time 0.003927 s, latency 3.75 ns
Buffer size: 2 KB, stride 256, time 0.003924 s, latency 3.74 ns
Buffer size: 4 KB, stride 256, time 0.003928 s, latency 3.75 ns
Buffer size: 8 KB, stride 256, time 0.003923 s, latency 3.74 ns
Buffer size: 16 KB, stride 256, time 0.003930 s, latency 3.75 ns
Buffer size: 32 KB, stride 256, time 0.003929 s, latency 3.75 ns
Buffer size: 64 KB, stride 256, time 0.007534 s, latency 7.19 ns
Buffer size: 128 KB, stride 256, time 0.007462 s, latency 7.12 ns
Buffer size: 256 KB, stride 256, time 0.007479 s, latency 7.13 ns
Buffer size: 512 KB, stride 256, time 0.007698 s, latency 7.34 ns
Buffer size: 512 KB, stride 128, time 0.007597 s, latency 7.25 ns
Buffer size: 1024 KB, stride 128, time 0.009169 s, latency 8.74 ns
Buffer size: 2048 KB, stride 128, time 0.010008 s, latency 9.55 ns
Buffer size: 4096 KB, stride 128, time 0.010008 s, latency 9.55 ns
Buffer size: 8192 KB, stride 128, time 0.010366 s, latency 9.89 ns
Buffer size: 16384 KB, stride 128, time 0.012031 s, latency 11.47 ns

// stride 256
Buffer size: 512 KB, stride 256, time 0.007698 s, latency 7.34 ns
Buffer size: 1024 KB, stride 256, time 0.012654 s, latency 12.07 ns
Buffer size: 2048 KB, stride 256, time 0.025210 s, latency 24.04 ns
Buffer size: 4096 KB, stride 256, time 0.025466 s, latency 24.29 ns
Buffer size: 8192 KB, stride 256, time 0.025840 s, latency 24.64 ns
Buffer size: 16384 KB, stride 256, time 0.027442 s, latency 26.17 ns
```

The L3 latency is now roughly as expected, but L1 and L2 are clearly too high.
A more general way to measure random-access latency is to shuffle the indices before chaining the buffer pointers together:

```c
// shuffle indices
for (i = 0; i < size; i++) {
    j = i + rand() % (size - i);
    if (i != j) {
        tmp = indices[i];
        indices[i] = indices[j];
        indices[j] = tmp;
    }
}
```

As you can see, the results are essentially the same as with stride 256:
```
Buffer size: 1 KB, stride 64, time 0.003942 s, latency 3.76 ns
Buffer size: 2 KB, stride 64, time 0.003925 s, latency 3.74 ns
Buffer size: 4 KB, stride 64, time 0.003928 s, latency 3.75 ns
Buffer size: 8 KB, stride 64, time 0.003931 s, latency 3.75 ns
Buffer size: 16 KB, stride 64, time 0.003932 s, latency 3.75 ns
Buffer size: 32 KB, stride 64, time 0.004276 s, latency 4.08 ns
Buffer size: 64 KB, stride 64, time 0.007465 s, latency 7.12 ns
Buffer size: 128 KB, stride 64, time 0.007470 s, latency 7.12 ns
Buffer size: 256 KB, stride 64, time 0.007521 s, latency 7.17 ns
Buffer size: 512 KB, stride 64, time 0.009340 s, latency 8.91 ns
Buffer size: 1024 KB, stride 64, time 0.015230 s, latency 14.53 ns
Buffer size: 2048 KB, stride 64, time 0.027567 s, latency 26.29 ns
Buffer size: 4096 KB, stride 64, time 0.027853 s, latency 26.56 ns
Buffer size: 8192 KB, stride 64, time 0.029945 s, latency 28.56 ns
Buffer size: 16384 KB, stride 64, time 0.034878 s, latency 33.26 ns
```

4. thinking with compiler: register keyword
With the L3 problem solved, we turn to why L1 and L2 read too high. To find out, let's disassemble the executable and check whether the instructions are what we expect:
```shell
objdump -D -S mem-lat > mem-lat.s
```
- -D: Display assembler contents of all sections.
- -S: Intermix source code with disassembly. (Requires compiling with gcc -g to generate debug info.)
The generated assembly, mem-lat.s:
```
    char **p = (char **)mem;
  400b3a: 48 8b 45 c8   mov -0x38(%rbp),%rax
  400b3e: 48 89 45 d0   mov %rax,-0x30(%rbp)  // push stack
//...
    HUNDRED;
  400b85: 48 8b 45 d0   mov -0x30(%rbp),%rax
  400b89: 48 8b 00      mov (%rax),%rax
  400b8c: 48 89 45 d0   mov %rax,-0x30(%rbp)
  400b90: 48 8b 45 d0   mov -0x30(%rbp),%rax
  400b94: 48 8b 00      mov (%rax),%rax
```

First, mem is assigned to p, and p is stored in the stack slot -0x30(%rbp). The memory-access logic is:
```
    HUNDRED;  // p = (char **)*p
  400b85: 48 8b 45 d0   mov -0x30(%rbp),%rax
  400b89: 48 8b 00      mov (%rax),%rax
  400b8c: 48 89 45 d0   mov %rax,-0x30(%rbp)
```

- First, read the value of the pointer variable p from the stack into rax. (p has type char **, a double pointer: p points to a char * variable, so p's value is itself an address.) Suppose p holds 0x2000.
- Load the value that rax points to into rax; this is the unary dereference *p. If address 0x2000 holds 0x3000, rax becomes 0x3000.
- Store rax back to p's stack slot; p's value is now 0x3000.
The disassembly shows that the single mov we expected has been compiled into three instructions, so the measured cache latency is inflated roughly threefold.
C's register keyword asks the compiler to keep a variable in a register, avoiding the cost of reading it from the stack on every access.
It's a hint to the compiler that the variable will be heavily used and that you recommend it be kept in a processor register if possible.
Declare p with the register keyword:

```c
register char **p = (char **)mem;
```

The results are as follows:
```
// L1
Buffer size: 1 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 2 KB, stride 64, time 0.000029 s, latency 0.03 ns
Buffer size: 4 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 8 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 16 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 32 KB, stride 64, time 0.000030 s, latency 0.03 ns
// L2
Buffer size: 64 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 128 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 256 KB, stride 64, time 0.000029 s, latency 0.03 ns
Buffer size: 512 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 1024 KB, stride 64, time 0.000030 s, latency 0.03 ns
// L3
Buffer size: 2048 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 4096 KB, stride 64, time 0.000029 s, latency 0.03 ns
Buffer size: 8192 KB, stride 64, time 0.000030 s, latency 0.03 ns
Buffer size: 16384 KB, stride 64, time 0.000030 s, latency 0.03 ns
```

Every measured latency is now under 1 ns, which is clearly wrong.
5. thinking with compiler: Touch it!
Disassemble again to see what went wrong. The compiled code:
```
    for (i = 0; i < tmp; ++i) {
  40155e: 48 c7 45 f8 00 00 00  movq $0x0,-0x8(%rbp)
  401565: 00
  401566: eb 05                 jmp 40156d <main+0x37e>
  401568: 48 83 45 f8 01        addq $0x1,-0x8(%rbp)
  40156d: 48 8b 45 f8           mov -0x8(%rbp),%rax
  401571: 48 3b 45 b0           cmp -0x50(%rbp),%rax
  401575: 72 f1                 jb 401568 <main+0x379>
        HUNDRED;
    }
    gettimeofday (&tv2, &tz);
  401577: 48 8d 95 78 ff ff ff  lea -0x88(%rbp),%rdx
  40157e: 48 8d 45 80           lea -0x80(%rbp),%rax
  401582: 48 89 d6              mov %rdx,%rsi
  401585: 48 89 c7              mov %rax,%rdi
  401588: e8 e3 fa ff ff        callq 401070 <gettimeofday@plt>
```

The HUNDRED macro generated no assembly at all. The statements involving p have no observable effect — they only read data — so the compiler almost certainly optimized them away.
The fix is to "touch" p after the timing loop so the reads cannot be eliminated:

```c
register char **p = (char **) mem;
tmp = count / 100;

gettimeofday (&tv1, &tz);
for (i = 0; i < tmp; ++i) {
    HUNDRED;
}
gettimeofday (&tv2, &tz);

/* touch pointer p to prevent compiler optimization */
char **touch = p;
```

Verify with the disassembler:
```
        HUNDRED;
  401570: 48 8b 1b    mov (%rbx),%rbx
  401573: 48 8b 1b    mov (%rbx),%rbx
  401576: 48 8b 1b    mov (%rbx),%rbx
  401579: 48 8b 1b    mov (%rbx),%rbx
  40157c: 48 8b 1b    mov (%rbx),%rbx
```

Now HUNDRED expands to nothing but mov instructions operating on the rbx register. Excellent.
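An alternative to the touch variable, on GCC or Clang, is an empty inline-asm statement that claims to read p: the compiler then cannot prove the dependent loads are dead. This is a sketch of that idiom under the assumption of a GCC-compatible compiler, not what mem-lat.c itself uses:

```c
/* Mark a value as "used" without emitting any instructions: the empty
 * asm body claims to read v, so the computation feeding it cannot be
 * optimized away. */
#define SINK(v) __asm__ __volatile__("" : : "r"(v) : "memory")

/* Pointer chase whose loads survive optimization thanks to SINK. */
static char **chase_kept(char **p, unsigned long steps)
{
    while (steps--)
        p = (char **)*p;
    SINK(p); /* keep the dependent loads alive even at -O2 */
    return p;
}
```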
The latency results:
```
// L1
Buffer size: 1 KB, stride 64, time 0.001687 s, latency 1.61 ns
Buffer size: 2 KB, stride 64, time 0.001684 s, latency 1.61 ns
Buffer size: 4 KB, stride 64, time 0.001682 s, latency 1.60 ns
Buffer size: 8 KB, stride 64, time 0.001693 s, latency 1.61 ns
Buffer size: 16 KB, stride 64, time 0.001683 s, latency 1.61 ns
Buffer size: 32 KB, stride 64, time 0.001783 s, latency 1.70 ns
// L2
Buffer size: 64 KB, stride 64, time 0.005896 s, latency 5.62 ns
Buffer size: 128 KB, stride 64, time 0.005915 s, latency 5.64 ns
Buffer size: 256 KB, stride 64, time 0.005955 s, latency 5.68 ns
Buffer size: 512 KB, stride 64, time 0.007856 s, latency 7.49 ns
Buffer size: 1024 KB, stride 64, time 0.014929 s, latency 14.24 ns
// L3
Buffer size: 2048 KB, stride 64, time 0.026970 s, latency 25.72 ns
Buffer size: 4096 KB, stride 64, time 0.026968 s, latency 25.72 ns
Buffer size: 8192 KB, stride 64, time 0.028823 s, latency 27.49 ns
Buffer size: 16384 KB, stride 64, time 0.033325 s, latency 31.78 ns
```

L1 at 1.61 ns and L2 at 5.62 ns — finally, in line with expectations!
Closing notes
The approach and code in this article draw on lmbench [3] and on mem-lat, a tool written by a teammate. Finally, one hole left open: when randomizing the buffer pointers, the effect of hardware TLB misses was not accounted for. Interested readers are welcome to dig in; I may fill this in when time allows.
This article is original content from Alibaba Cloud and may not be reproduced without permission.