2 Answers

Contributor with 2036 experience points · 8+ upvotes
I was able to find where gccgo requests so much memory. It is in the mallocinit function in the file libgo/go/runtime/malloc.go:
// If we fail to allocate, try again with a smaller arena.
// This is necessary on Android L where we share a process
// with ART, which reserves virtual memory aggressively.
// In the worst case, fall back to a 0-sized initial arena,
// in the hope that subsequent reservations will succeed.
arenaSizes := [...]uintptr{
	512 << 20,
	256 << 20,
	128 << 20,
	0,
}

for _, arenaSize := range &arenaSizes {
	// SysReserve treats the address we ask for, end, as a hint,
	// not as an absolute requirement. If we ask for the end
	// of the data segment but the operating system requires
	// a little more space before we can start allocating, it will
	// give out a slightly higher pointer. Except QEMU, which
	// is buggy, as usual: it won't adjust the pointer upward.
	// So adjust it upward a little bit ourselves: 1/4 MB to get
	// away from the running binary image and then round up
	// to a MB boundary.
	p = round(getEnd()+(1<<18), 1<<20)
	pSize = bitmapSize + spansSize + arenaSize + _PageSize
	if p <= procBrk && procBrk < p+pSize {
		// Move the start above the brk,
		// leaving some room for future brk
		// expansion.
		p = round(procBrk+(1<<20), 1<<20)
	}
	p = uintptr(sysReserve(unsafe.Pointer(p), pSize, &reserved))
	if p != 0 {
		break
	}
}
if p == 0 {
	throw("runtime: cannot reserve arena virtual address space")
}
Interestingly, if reserving a larger arena fails, it falls back to a smaller one. So limiting the virtual memory available to the go executable actually limits how much of it the runtime will successfully allocate.
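The fallback pattern in the runtime code above can be sketched in isolation. This is a simplified illustration, not the runtime's actual code: `reserveWithFallback` and the `tryReserve` callback are hypothetical stand-ins for the loop around `sysReserve`.

```go
package main

import "fmt"

// reserveWithFallback tries each candidate arena size in order and
// returns the first one the reservation callback accepts. A zero-sized
// final candidate mirrors the runtime's worst-case fallback, which
// guarantees the loop can always terminate successfully.
func reserveWithFallback(sizes []uintptr, tryReserve func(uintptr) bool) (uintptr, bool) {
	for _, size := range sizes {
		if tryReserve(size) {
			return size, true
		}
	}
	return 0, false
}

func main() {
	arenaSizes := []uintptr{512 << 20, 256 << 20, 128 << 20, 0}

	// Pretend the OS only lets us reserve up to 200 MB of
	// contiguous virtual address space.
	limit := uintptr(200 << 20)
	size, ok := reserveWithFallback(arenaSizes, func(s uintptr) bool {
		return s <= limit
	})
	fmt.Println(size>>20, ok) // falls back to the 128 MB arena
}
```

The 512 MB and 256 MB attempts fail against the simulated limit, so the loop settles on the 128 MB arena, just as the real runtime settles on whatever reservation the OS will grant.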
I was able to limit the virtual memory to a smaller number with ulimit -v 327680:
VmPeak: 300772 kB
VmSize: 300772 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 5712 kB
VmRSS: 5712 kB
VmData: 296276 kB
VmStk: 132 kB
VmExe: 2936 kB
VmLib: 0 kB
VmPTE: 56 kB
VmPMD: 0 kB
VmSwap: 0 kB
These are still large numbers, but they are the best a gccgo executable can achieve. So the answer to the question is: yes, you can reduce the VmData of a gccgo-compiled executable, but you really shouldn't worry about it. (On a 64-bit machine, gccgo tries to allocate 512 GB.)
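As a side note, the VmData figure quoted above can be extracted from /proc/&lt;pid&gt;/status-style output programmatically. A minimal sketch, assuming the "Key: value kB" format shown above; `parseStatus` is a hypothetical helper, and a real program would read the text via os.ReadFile("/proc/self/status") rather than from a literal string:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseStatus extracts the numeric kB values from "Key: value kB"
// lines (the /proc/<pid>/status format) into a map keyed by field name.
func parseStatus(status string) map[string]int {
	fields := make(map[string]int)
	for _, line := range strings.Split(status, "\n") {
		parts := strings.Fields(line)
		if len(parts) < 2 || !strings.HasSuffix(parts[0], ":") {
			continue
		}
		if n, err := strconv.Atoi(parts[1]); err == nil {
			fields[strings.TrimSuffix(parts[0], ":")] = n
		}
	}
	return fields
}

func main() {
	// A few of the lines quoted in the answer above; in a real
	// program this would come from reading /proc/self/status.
	status := "VmPeak: 300772 kB\nVmSize: 300772 kB\nVmData: 296276 kB\nVmRSS: 5712 kB"
	fields := parseStatus(status)
	fmt.Println(fields["VmData"]) // prints 296276
}
```

This makes it easy to watch VmData from inside a Go program while experimenting with different ulimit -v values.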

Contributor with 1793 experience points · 6+ upvotes
A likely cause is the libraries you link into your code. My guess is that if you linked explicitly against static libraries, you would end up with a smaller logical address space, since only the minimum would be added to your executable. In any case, a large logical address space does minimal harm.