
Linux Kernel: the PATCH Combining Gigantic Hugepages with CMA

Date: 2023-10-22 06:43:01


Contents

Overview

The Complete PATCH

Recommended Reading

Overview

This patch from Facebook's Roman Gushchin neatly combines Gigantic hugepages (size: 1 GB) with CMA:

/lkml//3/9/1135

CMA reserves a large chunk of memory at boot time, but as long as that memory has not been claimed via cma_alloc(), it can be reused by movable pages, so it is not simply wasted.

Linux's Gigantic hugepages, on the other hand, are expected to be allocatable at runtime via

echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

to request a given number of 1 GB Gigantic hugepages. Because memory fragments over time, such a 1 GB allocation is very likely to fail at runtime. Backed by CMA, the runtime allocation can succeed.

So the whole story is:

Suppose CMA reserves, say, 4 GB dedicated to hugetlb. If nobody configures any Gigantic hugepages, those 4 GB are simply used by applications' movable pages in the meantime.

If someone runs

echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

to take 1 GB, that 1 GB is carved out of the CMA area, and the remaining 3 GB can still be used by movable pages.

The size of the CMA area can be set at boot time with the hugetlb_cma kernel parameter. On a NUMA system (say, with 4 NUMA nodes), setting hugetlb_cma=4G gives each NUMA node its own 1 GB CMA area.

Looking at the code, when a 1 GB gigantic page is requested and such a CMA area exists, the CMA area is tried first:

@@ -1237,6 +1246,23 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
 
+	if (hugetlb_cma[0]) {
+		struct page *page;
+		int nid;
+
+		for_each_node_mask(nid, *nodemask) {
+			if (!hugetlb_cma[nid])
+				break;
+
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					 huge_page_order(h), true);
+			if (page)
+				return page;
+		}
+
+		return NULL;
+	}
+
 	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
 }

On the free path, the CMA area is likewise checked first:

 static void free_gigantic_page(struct page *page, unsigned int order)
 {
+	if (hugetlb_cma[0]) {
+		cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order);
+		return;
+	}
+
 	free_contig_range(page_to_pfn(page), 1 << order);
 }

If no such CMA area exists at all, the old code paths are still used:

alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);

free_contig_range(page_to_pfn(page), 1 << order);

The Complete PATCH


From: Roman Gushchin <>
Subject: [PATCH] mm: hugetlb: optionally allocate gigantic hugepages using cma
Date: Mon, 9 Mar 15:32:16 -0700

Commit 944d9fec8d7a ("hugetlb: add support for gigantic page allocation
at runtime") has added the run-time allocation of gigantic pages. However
it actually works only at early stages of the system loading, when
the majority of memory is free. After some time the memory gets
fragmented by non-movable pages, so the chances to find a contiguous
1 GB block are getting close to zero. Even dropping caches manually
doesn't help a lot.

At large scale rebooting servers in order to allocate gigantic hugepages
is quite expensive and complex. At the same time keeping some constant
percentage of memory in reserved hugepages even if the workload isn't
using it is a big waste: not all workloads can benefit from using 1 GB
pages.

The following solution can solve the problem:
1) On boot time a dedicated cma area* is reserved. The size is passed
   as a kernel argument.
2) Run-time allocations of gigantic hugepages are performed using the
   cma allocator and the dedicated cma area

In this case gigantic hugepages can be allocated successfully with a
high probability, however the memory isn't completely wasted if nobody
is using 1GB hugepages: it can be used for pagecache, anon memory,
THPs, etc.

* On a multi-node machine a per-node cma area is allocated on each node.
  Following gigantic hugetlb allocation are using the first available
  numa node if the mask isn't specified by a user.

Usage:
1) configure the kernel to allocate a cma area for hugetlb allocations:
   pass hugetlb_cma=10G as a kernel argument
2) allocate hugetlb pages as usual, e.g.
   echo 10 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

If the option isn't enabled or the allocation of the cma area failed,
the current behavior of the system is preserved.

Only x86 is covered by this patch, but it's trivial to extend it to
cover other architectures as well.

Signed-off-by: Roman Gushchin <guro@>
---
 .../admin-guide/kernel-parameters.txt |   7 ++
 arch/x86/kernel/setup.c               |   3 +
 include/linux/hugetlb.h               |   2 +
 mm/hugetlb.c                          | 108 ++++++++++++++++++
 4 files changed, 120 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0c9894247015..d3349ec1dbef 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1452,6 +1452,13 @@
 	hpet_mmap=	[X86, HPET_MMAP] Allow userspace to mmap HPET
 			registers. Default set by CONFIG_HPET_MMAP_DEFAULT.
 
+	hugetlb_cma=	[x86-64] The size of a cma area used for allocation
+			of gigantic hugepages.
+			Format: nn[GTPE] | nn%
+
+			If enabled, boot-time allocation of gigantic hugepages
+			is skipped.
+
 	hugepages=	[HW,X86-32,IA-64] HugeTLB pages to allocate at boot.
 	hugepagesz=	[HW,IA-64,PPC,X86-64] The size of the HugeTLB pages.
 			On x86-64 and powerpc, this option can be specified

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index a74262c71484..ceeb06ddfd41 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -16,6 +16,7 @@
 #include <linux/pci.h>
 #include <linux/root_dev.h>
 #include <linux/sfi.h>
+#include <linux/hugetlb.h>
 #include <linux/tboot.h>
 #include <linux/usb/xhci-dbgp.h>
 
@@ -1158,6 +1159,8 @@ void __init setup_arch(char **cmdline_p)
 	initmem_init();
 	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
 
+	hugetlb_cma_reserve();
+
 	/*
 	 * Reserve memory for crash kernel after SRAT is parsed so that it
 	 * won't consume hotpluggable memory.

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 50480d16bd33..50050c981ab9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -157,6 +157,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud);
 extern int sysctl_hugetlb_shm_group;
 extern struct list_head huge_boot_pages;
 
+extern void __init hugetlb_cma_reserve(void);
+
 /* arch callbacks */
 
 pte_t *huge_pte_alloc(struct mm_struct *mm,

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7fb31750e670..f2e6e0a37263 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -28,6 +28,7 @@
 #include <linux/jhash.h>
 #include <linux/numa.h>
 #include <linux/llist.h>
+#include <linux/cma.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -44,6 +45,9 @@
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
 struct hstate hstates[HUGE_MAX_HSTATE];
+
+static struct cma *hugetlb_cma[MAX_NUMNODES];
+
 /*
  * Minimum page order among possible hugepage sizes, set to a proper value
  * at boot time.
@@ -1228,6 +1232,11 @@ static void destroy_compound_gigantic_page(struct page *page,
 
 static void free_gigantic_page(struct page *page, unsigned int order)
 {
+	if (hugetlb_cma[0]) {
+		cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order);
+		return;
+	}
+
 	free_contig_range(page_to_pfn(page), 1 << order);
 }
 
@@ -1237,6 +1246,23 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
 
+	if (hugetlb_cma[0]) {
+		struct page *page;
+		int nid;
+
+		for_each_node_mask(nid, *nodemask) {
+			if (!hugetlb_cma[nid])
+				break;
+
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					 huge_page_order(h), true);
+			if (page)
+				return page;
+		}
+
+		return NULL;
+	}
+
 	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
 }
 
@@ -2439,6 +2465,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 	for (i = 0; i < h->max_huge_pages; ++i) {
 		if (hstate_is_gigantic(h)) {
+			if (hugetlb_cma[0]) {
+				pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n");
+				break;
+			}
 			if (!alloc_bootmem_huge_page(h))
 				break;
 		} else if (!alloc_pool_huge_page(h,
@@ -5372,3 +5402,81 @@ void move_hugetlb_state(struct page *oldpage, struct page *newpage, int reason)
 		spin_unlock(&hugetlb_lock);
 	}
 }
+
+static unsigned long hugetlb_cma_size __initdata;
+static unsigned long hugetlb_cma_percent __initdata;
+
+static int __init cmdline_parse_hugetlb_cma(char *p)
+{
+	unsigned long long val;
+	char *endptr;
+
+	if (!p)
+		return -EINVAL;
+
+	/* Value may be a percentage of total memory, otherwise bytes */
+	val = simple_strtoull(p, &endptr, 0);
+	if (*endptr == '%')
+		hugetlb_cma_percent = clamp_t(unsigned long, val, 0, 100);
+	else
+		hugetlb_cma_size = memparse(p, &p);
+
+	return 0;
+}
+
+early_param("hugetlb_cma", cmdline_parse_hugetlb_cma);
+
+void __init hugetlb_cma_reserve(void)
+{
+	unsigned long totalpages = 0;
+	unsigned long start_pfn, end_pfn;
+	phys_addr_t size;
+	int nid, i, res;
+
+	if (!hugetlb_cma_size && !hugetlb_cma_percent)
+		return;
+
+	if (hugetlb_cma_percent) {
+		for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn,
+				       NULL)
+			totalpages += end_pfn - start_pfn;
+
+		size = PAGE_SIZE * (hugetlb_cma_percent * 100 * totalpages) /
+			10000UL;
+	} else {
+		size = hugetlb_cma_size;
+	}
+
+	pr_info("hugetlb_cma: reserve %llu, %llu per node\n", size,
+		size / nr_online_nodes);
+
+	size /= nr_online_nodes;
+
+	for_each_node_state(nid, N_ONLINE) {
+		unsigned long min_pfn = 0, max_pfn = 0;
+
+		for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
+			if (!min_pfn)
+				min_pfn = start_pfn;
+			max_pfn = end_pfn;
+		}
+
+		res = cma_declare_contiguous(PFN_PHYS(min_pfn), size,
+					     PFN_PHYS(max_pfn), (1UL << 30),
+					     0, false,
+					     "hugetlb", &hugetlb_cma[nid]);
+		if (res) {
+			pr_warn("hugetlb_cma: reservation failed: err %d, node %d, [%llu, %llu)",
+				res, nid, PFN_PHYS(min_pfn), PFN_PHYS(max_pfn));
+
+			for (; nid >= 0; nid--)
+				hugetlb_cma[nid] = NULL;
+
+			break;
+		}
+
+		pr_info("hugetlb_cma: successfully reserved %llu on node %d\n",
+			size, nid);
+	}
+}
-- 
2.24.1

Recommended Reading

"CMA Documentation"

"How to get rid of CMA"

"Aarch64 Kernel Memory Management"

"Linux memory management: what is CMA (contiguous memory allocation)? It can be used together with DMA"

"The full combination of Gigantic hugepages and CMA"

"Linux hugepage theory"

"Using hugepage memory: allocating and freeing hugepages"

"HugeTLB Pages"
