author     jc_gargma <jc_gargma@iserlohn-fortress.net>  2022-12-07 15:19:35 -0800
committer  jc_gargma <jc_gargma@iserlohn-fortress.net>  2022-12-07 15:19:35 -0800
commit     0a6b09fcef4251c01d3c3cedd7e70c14330dc4f1 (patch)
tree       e1d7adf581bdf1ffb2688313a2d6a01d340ef0ad
parent     Updated to 6.0.9 (diff)
download   linux-0a6b09fcef4251c01d3c3cedd7e70c14330dc4f1.tar.xz
Updated to 6.0.11
-rw-r--r--  0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch                                                                         |   4
-rw-r--r--  0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch                                                                            | 137
-rw-r--r--  0002-soundwire-intel-Initialize-clock-stop-timeout.patch (renamed from 0003-soundwire-intel-Initialize-clock-stop-timeout.patch)        |   4
-rw-r--r--  0003-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch (renamed from 0004-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch)  |   8
-rw-r--r--  0004-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch (renamed from 0005-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch) |   4
-rw-r--r--  0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch                                                                         | 239
-rw-r--r--  PKGBUILD                                                                                                                               |  46
-rw-r--r--  config                                                                                                                                 |   3
8 files changed, 274 insertions(+), 171 deletions(-)
diff --git a/0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch b/0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch index c4e31a7..3ff5f1e 100644 --- a/0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch +++ b/0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch @@ -1,4 +1,4 @@ -From 49992931eca40344a831396bef4652a8fc7d70c2 Mon Sep 17 00:00:00 2001 +From 400ac2ce023cd844f421a9781024fd89173388ac Mon Sep 17 00:00:00 2001 From: "Jan Alexander Steffens (heftig)" <jan.steffens@gmail.com> Date: Mon, 16 Sep 2019 04:53:20 +0200 Subject: [PATCH 1/6] ZEN: Add sysctl and CONFIG to disallow unprivileged @@ -36,7 +36,7 @@ index 33a4240e6a6f..82213f9c4c17 100644 { return &init_user_ns; diff --git a/init/Kconfig b/init/Kconfig -index 532362fcfe31..f13bb9f371a2 100644 +index d1d779d6ba43..bd90c221090d 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1241,6 +1241,22 @@ config USER_NS diff --git a/0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch b/0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch deleted file mode 100644 index d9f6259..0000000 --- a/0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch +++ /dev/null @@ -1,137 +0,0 @@ -From 063f9088d320205657d15dbef4cbef78b4780ea7 Mon Sep 17 00:00:00 2001 -From: Johannes Weiner <hannes@cmpxchg.org> -Date: Tue, 2 Aug 2022 12:28:11 -0400 -Subject: [PATCH 2/6] mm: vmscan: fix extreme overreclaim and swap floods - -During proactive reclaim, we sometimes observe severe overreclaim, with -several thousand times more pages reclaimed than requested. - -This trace was obtained from shrink_lruvec() during such an instance: - - prio:0 anon_cost:1141521 file_cost:7767 - nr_reclaimed:4387406 nr_to_reclaim:1047 (or_factor:4190) - nr=[7161123 345 578 1111] - -While he reclaimer requested 4M, vmscan reclaimed close to 16G, most of it -by swapping. These requests take over a minute, during which the write() -to memory.reclaim is unkillably stuck inside the kernel. 
- -Digging into the source, this is caused by the proportional reclaim -bailout logic. This code tries to resolve a fundamental conflict: to -reclaim roughly what was requested, while also aging all LRUs fairly and -in accordance to their size, swappiness, refault rates etc. The way it -attempts fairness is that once the reclaim goal has been reached, it stops -scanning the LRUs with the smaller remaining scan targets, and adjusts the -remainder of the bigger LRUs according to how much of the smaller LRUs was -scanned. It then finishes scanning that remainder regardless of the -reclaim goal. - -This works fine if priority levels are low and the LRU lists are -comparable in size. However, in this instance, the cgroup that is -targeted by proactive reclaim has almost no files left - they've already -been squeezed out by proactive reclaim earlier - and the remaining anon -pages are hot. Anon rotations cause the priority level to drop to 0, -which results in reclaim targeting all of anon (a lot) and all of file -(almost nothing). By the time reclaim decides to bail, it has scanned -most or all of the file target, and therefor must also scan most or all of -the enormous anon target. This target is thousands of times larger than -the reclaim goal, thus causing the overreclaim. - -The bailout code hasn't changed in years, why is this failing now? The -most likely explanations are two other recent changes in anon reclaim: - -1. Before the series starting with commit 5df741963d52 ("mm: fix LRU - balancing effect of new transparent huge pages"), the VM was - overall relatively reluctant to swap at all, even if swap was - configured. This means the LRU balancing code didn't come into play - as often as it does now, and mostly in high pressure situations - where pronounced swap activity wouldn't be as surprising. - -2. 
For historic reasons, shrink_lruvec() loops on the scan targets of - all LRU lists except the active anon one, meaning it would bail if - the only remaining pages to scan were active anon - even if there - were a lot of them. - - Before the series starting with commit ccc5dc67340c ("mm/vmscan: - make active/inactive ratio as 1:1 for anon lru"), most anon pages - would live on the active LRU; the inactive one would contain only a - handful of preselected reclaim candidates. After the series, anon - gets aged similarly to file, and the inactive list is the default - for new anon pages as well, making it often the much bigger list. - - As a result, the VM is now more likely to actually finish large - anon targets than before. - -Change the code such that only one SWAP_CLUSTER_MAX-sized nudge toward the -larger LRU lists is made before bailing out on a met reclaim goal. - -This fixes the extreme overreclaim problem. - -Fairness is more subtle and harder to evaluate. No obvious misbehavior -was observed on the test workload, in any case. Conceptually, fairness -should primarily be a cumulative effect from regular, lower priority -scans. Once the VM is in trouble and needs to escalate scan targets to -make forward progress, fairness needs to take a backseat. This is also -acknowledged by the myriad exceptions in get_scan_count(). This patch -makes fairness decrease gradually, as it keeps fairness work static over -increasing priority levels with growing scan targets. This should make -more sense - although we may have to re-visit the exact values. 
- -Link: https://lkml.kernel.org/r/20220802162811.39216-1-hannes@cmpxchg.org -Signed-off-by: Johannes Weiner <hannes@cmpxchg.org> -Reviewed-by: Rik van Riel <riel@surriel.com> -Acked-by: Mel Gorman <mgorman@techsingularity.net> -Cc: Hugh Dickins <hughd@google.com> -Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> -Cc: <stable@vger.kernel.org> -Signed-off-by: Andrew Morton <akpm@linux-foundation.org> ---- - mm/vmscan.c | 10 ++++------ - 1 file changed, 4 insertions(+), 6 deletions(-) - -diff --git a/mm/vmscan.c b/mm/vmscan.c -index 382dbe97329f..266eb8cfe93a 100644 ---- a/mm/vmscan.c -+++ b/mm/vmscan.c -@@ -2955,8 +2955,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) - enum lru_list lru; - unsigned long nr_reclaimed = 0; - unsigned long nr_to_reclaim = sc->nr_to_reclaim; -+ bool proportional_reclaim; - struct blk_plug plug; -- bool scan_adjusted; - - get_scan_count(lruvec, sc, nr); - -@@ -2974,8 +2974,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) - * abort proportional reclaim if either the file or anon lru has already - * dropped to zero at the first pass. 
- */ -- scan_adjusted = (!cgroup_reclaim(sc) && !current_is_kswapd() && -- sc->priority == DEF_PRIORITY); -+ proportional_reclaim = (!cgroup_reclaim(sc) && !current_is_kswapd() && -+ sc->priority == DEF_PRIORITY); - - blk_start_plug(&plug); - while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] || -@@ -2995,7 +2995,7 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) - - cond_resched(); - -- if (nr_reclaimed < nr_to_reclaim || scan_adjusted) -+ if (nr_reclaimed < nr_to_reclaim || proportional_reclaim) - continue; - - /* -@@ -3046,8 +3046,6 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc) - nr_scanned = targets[lru] - nr[lru]; - nr[lru] = targets[lru] * (100 - percentage) / 100; - nr[lru] -= min(nr[lru], nr_scanned); -- -- scan_adjusted = true; - } - blk_finish_plug(&plug); - sc->nr_reclaimed += nr_reclaimed; --- -2.38.1 - diff --git a/0003-soundwire-intel-Initialize-clock-stop-timeout.patch b/0002-soundwire-intel-Initialize-clock-stop-timeout.patch index 64b1447..ded86ff 100644 --- a/0003-soundwire-intel-Initialize-clock-stop-timeout.patch +++ b/0002-soundwire-intel-Initialize-clock-stop-timeout.patch @@ -1,7 +1,7 @@ -From aa0c728c5142d9306b0e7b2ace8b42fbd0c0291f Mon Sep 17 00:00:00 2001 +From fe7004128c90077116d07de74ee43ad1270ce3e7 Mon Sep 17 00:00:00 2001 From: Sjoerd Simons <sjoerd@collabora.com> Date: Sat, 8 Oct 2022 21:57:51 +0200 -Subject: [PATCH 3/6] soundwire: intel: Initialize clock stop timeout +Subject: [PATCH 2/6] soundwire: intel: Initialize clock stop timeout The bus->clk_stop_timeout member is only initialized to a non-zero value during the codec driver probe. 
This can lead to corner cases where this diff --git a/0004-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch b/0003-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch index 4bbf2b7..04d18c0 100644 --- a/0004-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch +++ b/0003-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch @@ -1,7 +1,7 @@ -From b17439d6e0fb4ce52f5da66f55a250cced4fd4a8 Mon Sep 17 00:00:00 2001 +From 95123f7acd2c486547a808631ea879eb5782738d Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christian=20K=C3=B6nig?= <christian.koenig@amd.com> Date: Fri, 7 Oct 2022 09:51:13 +0200 -Subject: [PATCH 4/6] drm/sched: add DRM_SCHED_FENCE_DONT_PIPELINE flag +Subject: [PATCH 3/6] drm/sched: add DRM_SCHED_FENCE_DONT_PIPELINE flag MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @@ -17,10 +17,10 @@ Signed-off-by: Christian König <christian.koenig@amd.com> 2 files changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c -index 6b25b2f4f5a3..6137537aaea4 100644 +index 7ef1a086a6fb..4b913dbb7d7b 100644 --- a/drivers/gpu/drm/scheduler/sched_entity.c +++ b/drivers/gpu/drm/scheduler/sched_entity.c -@@ -385,7 +385,8 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) +@@ -389,7 +389,8 @@ static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity) } s_fence = to_drm_sched_fence(fence); diff --git a/0005-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch b/0004-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch index 741ced6..e8eff5c 100644 --- a/0005-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch +++ b/0004-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch @@ -1,7 +1,7 @@ -From 325e244c9f25829d8467ecf4d427ddcaab06b261 Mon Sep 17 00:00:00 2001 +From 80d7a84de8dde6b960af432751bde998b70acc98 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Christian=20K=C3=B6nig?= 
<christian.koenig@amd.com> Date: Fri, 7 Oct 2022 10:59:58 +0200 -Subject: [PATCH 5/6] drm/amdgpu: use DRM_SCHED_FENCE_DONT_PIPELINE for VM +Subject: [PATCH 4/6] drm/amdgpu: use DRM_SCHED_FENCE_DONT_PIPELINE for VM updates MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 diff --git a/0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch b/0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch new file mode 100644 index 0000000..fa7781c --- /dev/null +++ b/0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch @@ -0,0 +1,239 @@ +From 47e6d679cc4bab574bf32da863afafca4aad11b0 Mon Sep 17 00:00:00 2001 +From: Matthew Auld <matthew.auld@intel.com> +Date: Thu, 1 Dec 2022 15:25:22 +0000 +Subject: [PATCH 5/6] drm/i915: improve the catch-all evict to handle lock + contention + +The catch-all evict can fail due to object lock contention, since it +only goes as far as trylocking the object, due to us already holding the +vm->mutex. Doing a full object lock can deadlock the system, since the +vm->mutex is always our inner lock. Add another execbuf pass which drops +the vm->mutex and then tries to grab the object will the full lock, +before then retrying the eviction. 
+ +Testcase: igt@igem_ppgtt@shrink-vs-evict-* +References: https://gitlab.freedesktop.org/drm/intel/-/issues/7570 +Signed-off-by: Matthew Auld <matthew.auld@intel.com> + +Revision 4 of https://patchwork.freedesktop.org/series/111271/ +--- + .../gpu/drm/i915/gem/i915_gem_execbuffer.c | 25 +++++++++++-- + drivers/gpu/drm/i915/gem/i915_gem_mman.c | 2 +- + drivers/gpu/drm/i915/i915_gem_evict.c | 37 ++++++++++++++----- + drivers/gpu/drm/i915/i915_gem_evict.h | 4 +- + drivers/gpu/drm/i915/i915_vma.c | 2 +- + .../gpu/drm/i915/selftests/i915_gem_evict.c | 4 +- + 6 files changed, 56 insertions(+), 18 deletions(-) + +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +index cd75b0ca2555..885fe8855718 100644 +--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c ++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c +@@ -741,25 +741,44 @@ static int eb_reserve(struct i915_execbuffer *eb) + * + * Defragmenting is skipped if all objects are pinned at a fixed location. 
+ */ +- for (pass = 0; pass <= 2; pass++) { ++ for (pass = 0; pass <= 3; pass++) { + int pin_flags = PIN_USER | PIN_VALIDATE; + + if (pass == 0) + pin_flags |= PIN_NONBLOCK; + + if (pass >= 1) +- unpinned = eb_unbind(eb, pass == 2); ++ unpinned = eb_unbind(eb, pass >= 2); + + if (pass == 2) { + err = mutex_lock_interruptible(&eb->context->vm->mutex); + if (!err) { +- err = i915_gem_evict_vm(eb->context->vm, &eb->ww); ++ err = i915_gem_evict_vm(eb->context->vm, &eb->ww, NULL); + mutex_unlock(&eb->context->vm->mutex); + } + if (err) + return err; + } + ++ if (pass == 3) { ++retry: ++ err = mutex_lock_interruptible(&eb->context->vm->mutex); ++ if (!err) { ++ struct drm_i915_gem_object *busy_bo = NULL; ++ ++ err = i915_gem_evict_vm(eb->context->vm, &eb->ww, &busy_bo); ++ mutex_unlock(&eb->context->vm->mutex); ++ if (err && busy_bo) { ++ err = i915_gem_object_lock(busy_bo, &eb->ww); ++ i915_gem_object_put(busy_bo); ++ if (!err) ++ goto retry; ++ } ++ } ++ if (err) ++ return err; ++ } ++ + list_for_each_entry(ev, &eb->unbound, bind_link) { + err = eb_reserve_vma(eb, ev, pin_flags); + if (err) +diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +index 0c5c43852e24..6f579cb8f2ff 100644 +--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c ++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c +@@ -369,7 +369,7 @@ static vm_fault_t vm_fault_gtt(struct vm_fault *vmf) + if (vma == ERR_PTR(-ENOSPC)) { + ret = mutex_lock_interruptible(&ggtt->vm.mutex); + if (!ret) { +- ret = i915_gem_evict_vm(&ggtt->vm, &ww); ++ ret = i915_gem_evict_vm(&ggtt->vm, &ww, NULL); + mutex_unlock(&ggtt->vm.mutex); + } + if (ret) +diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c +index f025ee4fa526..a4b4d9b7d26c 100644 +--- a/drivers/gpu/drm/i915/i915_gem_evict.c ++++ b/drivers/gpu/drm/i915/i915_gem_evict.c +@@ -416,6 +416,11 @@ int i915_gem_evict_for_node(struct i915_address_space *vm, + * @vm: Address space to cleanse + * @ww: 
An optional struct i915_gem_ww_ctx. If not NULL, i915_gem_evict_vm + * will be able to evict vma's locked by the ww as well. ++ * @busy_bo: Optional pointer to struct drm_i915_gem_object. If not NULL, then ++ * in the event i915_gem_evict_vm() is unable to trylock an object for eviction, ++ * then @busy_bo will point to it. -EBUSY is also returned. The caller must drop ++ * the vm->mutex, before trying again to acquire the contended lock. The caller ++ * also owns a reference to the object. + * + * This function evicts all vmas from a vm. + * +@@ -425,7 +430,8 @@ int i915_gem_evict_for_node(struct i915_address_space *vm, + * To clarify: This is for freeing up virtual address space, not for freeing + * memory in e.g. the shrinker. + */ +-int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww) ++int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww, ++ struct drm_i915_gem_object **busy_bo) + { + int ret = 0; + +@@ -457,15 +463,22 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww) + * the resv is shared among multiple objects, we still + * need the object ref. 
+ */ +- if (dying_vma(vma) || ++ if (!i915_gem_object_get_rcu(vma->obj) || + (ww && (dma_resv_locking_ctx(vma->obj->base.resv) == &ww->ctx))) { + __i915_vma_pin(vma); + list_add(&vma->evict_link, &locked_eviction_list); + continue; + } + +- if (!i915_gem_object_trylock(vma->obj, ww)) ++ if (!i915_gem_object_trylock(vma->obj, ww)) { ++ if (busy_bo) { ++ *busy_bo = vma->obj; /* holds ref */ ++ ret = -EBUSY; ++ break; ++ } ++ i915_gem_object_put(vma->obj); + continue; ++ } + + __i915_vma_pin(vma); + list_add(&vma->evict_link, &eviction_list); +@@ -473,25 +486,29 @@ int i915_gem_evict_vm(struct i915_address_space *vm, struct i915_gem_ww_ctx *ww) + if (list_empty(&eviction_list) && list_empty(&locked_eviction_list)) + break; + +- ret = 0; + /* Unbind locked objects first, before unlocking the eviction_list */ + list_for_each_entry_safe(vma, vn, &locked_eviction_list, evict_link) { + __i915_vma_unpin(vma); + +- if (ret == 0) ++ if (ret == 0) { + ret = __i915_vma_unbind(vma); +- if (ret != -EINTR) /* "Get me out of here!" */ +- ret = 0; ++ if (ret != -EINTR) /* "Get me out of here!" */ ++ ret = 0; ++ } ++ if (!dying_vma(vma)) ++ i915_gem_object_put(vma->obj); + } + + list_for_each_entry_safe(vma, vn, &eviction_list, evict_link) { + __i915_vma_unpin(vma); +- if (ret == 0) ++ if (ret == 0) { + ret = __i915_vma_unbind(vma); +- if (ret != -EINTR) /* "Get me out of here!" */ +- ret = 0; ++ if (ret != -EINTR) /* "Get me out of here!" 
*/ ++ ret = 0; ++ } + + i915_gem_object_unlock(vma->obj); ++ i915_gem_object_put(vma->obj); + } + } while (ret == 0); + +diff --git a/drivers/gpu/drm/i915/i915_gem_evict.h b/drivers/gpu/drm/i915/i915_gem_evict.h +index e593c530f9bd..bf0ee0e4fe60 100644 +--- a/drivers/gpu/drm/i915/i915_gem_evict.h ++++ b/drivers/gpu/drm/i915/i915_gem_evict.h +@@ -11,6 +11,7 @@ + struct drm_mm_node; + struct i915_address_space; + struct i915_gem_ww_ctx; ++struct drm_i915_gem_object; + + int __must_check i915_gem_evict_something(struct i915_address_space *vm, + struct i915_gem_ww_ctx *ww, +@@ -23,6 +24,7 @@ int __must_check i915_gem_evict_for_node(struct i915_address_space *vm, + struct drm_mm_node *node, + unsigned int flags); + int i915_gem_evict_vm(struct i915_address_space *vm, +- struct i915_gem_ww_ctx *ww); ++ struct i915_gem_ww_ctx *ww, ++ struct drm_i915_gem_object **busy_bo); + + #endif /* __I915_GEM_EVICT_H__ */ +diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c +index 373582cfd8f3..240b7b8ed281 100644 +--- a/drivers/gpu/drm/i915/i915_vma.c ++++ b/drivers/gpu/drm/i915/i915_vma.c +@@ -1569,7 +1569,7 @@ static int __i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww, + * locked objects when called from execbuf when pinning + * is removed. This would probably regress badly. 
+ */ +- i915_gem_evict_vm(vm, NULL); ++ i915_gem_evict_vm(vm, NULL, NULL); + mutex_unlock(&vm->mutex); + } + } while (1); +diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c +index 8c6517d29b8e..37068542aafe 100644 +--- a/drivers/gpu/drm/i915/selftests/i915_gem_evict.c ++++ b/drivers/gpu/drm/i915/selftests/i915_gem_evict.c +@@ -344,7 +344,7 @@ static int igt_evict_vm(void *arg) + + /* Everything is pinned, nothing should happen */ + mutex_lock(&ggtt->vm.mutex); +- err = i915_gem_evict_vm(&ggtt->vm, NULL); ++ err = i915_gem_evict_vm(&ggtt->vm, NULL, NULL); + mutex_unlock(&ggtt->vm.mutex); + if (err) { + pr_err("i915_gem_evict_vm on a full GGTT returned err=%d]\n", +@@ -356,7 +356,7 @@ static int igt_evict_vm(void *arg) + + for_i915_gem_ww(&ww, err, false) { + mutex_lock(&ggtt->vm.mutex); +- err = i915_gem_evict_vm(&ggtt->vm, &ww); ++ err = i915_gem_evict_vm(&ggtt->vm, &ww, NULL); + mutex_unlock(&ggtt->vm.mutex); + } + +-- +2.38.1 + @@ -19,7 +19,7 @@ _custom=0 pkgbase=linux _supver=6 _majver=0 -_minver=9 +_minver=11 _gccpatchver='20221104' _gccpatchker='5.17+' if [ "$_minver" == "0" ]; then @@ -43,10 +43,10 @@ source=( https://www.kernel.org/pub/linux/kernel/v${_supver}.x/${_srcname}.tar.{xz,sign} config # the main kernel config file 0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch - 0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch - 0003-soundwire-intel-Initialize-clock-stop-timeout.patch - 0004-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch - 0005-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch + 0002-soundwire-intel-Initialize-clock-stop-timeout.patch + 0003-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch + 0004-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch + 0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch 
kernel_compiler_patch-${_gccpatchver}.tar.gz::https://github.com/graysky2/kernel_compiler_patch/archive/${_gccpatchver}.tar.gz ath9k-regdom-hack.patch raid6-default-algo.patch @@ -56,25 +56,25 @@ validpgpkeys=( '647F28654894E3BD457199BE38DBBDC86092693E' # Greg Kroah-Hartman ) # https://www.kernel.org/pub/linux/kernel/v5.x/sha256sums.asc -sha256sums=('6114a208e82739b4a1ab059ace35262be2a83be34cd1ae23cb8a09337db831c7' +sha256sums=('2bae6131e64971e1e34ff395fa542971134c857bdb0b29069ab847c7c9a9c762' 'SKIP' - '6cae8edbfa8793f8f439b625eddac6c229b2703bb1ecb834da6339fb2b96cfbb' - 'e63fd774a7aeb34c9fe95ee086f6fdbf5303863d71115270d1015f25368be55e' - '597da698df84ef920286620526f43efd66036a4aef7bb937762bddce155aa3df' - 'c79d226fa657ef74e4fc5e707129f9ec9cba7899ed751894d5daa9bcec70b0fa' - '475d817d60f41d1497504692e2eb9d0682a89bc78a0362a178960dea3cb42bac' - '8a0f3787b545db8ba18aa6ec8850e96acc8a628fb6ce4d3d7707214a5a95cd19' + 'e490737c0007da7bf9275e4d5f0162b64bb27d31169c9a24c2258c56c76fa43f' + '2f4d03a8bb21357f88d694b62fc3299944fa1738652dfe888ac0320d5d21f351' + '2fe671aab9f164a3841e75d837058f208ee781854e6c41e829ddde789bfdc4c2' + '671c3852d1adf7095cf82fdecf197c65df4d3003c917b56cee2fc9845cd06883' + '3d00e39c53c107c87925eaeade32fc7d78e916e588ab5d8e4dd84c33ae748a96' + '753576c6bc05bab969c5824fdb8dd8e6e1131d4c7f805dbaf5c529aafd2a1b6b' '3a8f397b89bad95c46f42c0f80ede7536a4a45a28621e00ed486918a55f905ed' 'e9e0d289170b7fb598b572d9c892ae8d1420952034aa415e8b3334f20a58edcc' '6ab863c8cfe6e0dd53a9d6455872fd4391508a8b97ab92e3f13558a6617b12a6') -b2sums=('23ad9036cc771135c4f6bcb17950ec61e182981c4bff596062aa92cfccc66c316b35598e7a162ca4c346ca6b18796e2c5fd8112b05544fdde062faa0c3a82305' +b2sums=('55d3fc789bd0775afc3415c3cdbdf0819f1cce599fa383a44fd62eb80734bc204ef9848b35ae7746b6fb74db58bff106639ed2ac971b12b60f0ade3fae96f404' 'SKIP' - '45e471912f742b68bd445a2d7d9f4026147c8f315712831e12096bf98f8777404acaada639b6ffd52efbe14ec2661dc8b72464990d8c6efc83c2f4f91a01850c' - 
'991fb0113e0ac1b69fefa3a2e34b85a22c8c16217c320f03860ab73e677781cb1769ffd6fd4122bb43d3f05269db88b347a93b7424cfdf654fb6e64e631e9e7e' - '1bd4c5a68a98ab86b951f3aeb91cbd0ea65ad0513f6c8462c9bcca08f9515dab49a6580f9ad31716522793f757e2f2f016d63b64580013014618fadd447c48c1' - '91510880816cac70e6d29ce2d9a38993d0df25b2317760b89839d96dc061fb2f9dfcbb9473fe7bc254c5d939e1a6da0f39c540e5658b92dd954fda3080a9358a' - '7a9753747d9447289e3531a8791d6b297c71cfdd9da0a738069ff14910230c036e303e3031690914f73bcadfec1db5f2f25bbdb1af8a85b59cf269fd231ef1fe' - '4ad868a8f4d5a0d0f32c36291e37a21ec9b53f7ea424afbdafd5934cdab9e3553eefa789ede55c5cb3081f37bcef742f30465b85c686e10c442257602e59c54f' + '02755ec326bde2703f0019d9fa214f6a7db52af43d47dde9ba87cd95bdc40fc4cc47f678a677640e4a472ec46790ba31ac36c37ded650740a1ef4f23b3f5e92b' + '9d9dd78748f901b9be5876e1b34f4f13cae51384fbbfa653678924c6c8d90fdddf71b8c524aa2bc24548c6111adf160153753bd8a738866a2473459a338df6a0' + 'e2b8f09db5299046f4b37caeef16bb26430432a63fa58c7055eaeec58c70e1e12cfe712b37e54325b6273c317294b654a3d9b16ce4b0862b9ae679d766d08245' + 'd7b384645d39201c43871e3c6067012b8bb2b30bcce3217dbfdd6cc9437e1cf1ab7dbbbcd333cc5b9e04a986a6d1aefa9659cab73fdd308878621104285138b8' + 'bc9d55712dbcb69a6220ffa98da46ac8ed78c2901e74aa65d4a5f35796a7cc17f2bd87d9d62924c18266f703bea67debf120d5ceff7a21963d1b35e022391252' + 'c819f6054da048c136d058e2925dcce4fab694917bd0608aee7f616a74c2464176b9c842764af49589bcbf298bb267528c7607f81137415cfccde2fa51b4beea' '05bddc2b57189d7e302f32041079bcf60a06938e9afcfd02a0085a1430255286a5b663eed7cdb7f857717c65e9e27af4d15625b17e0a36d1b4ce1cbda6baee2b' 'b6ef77035611139fa9a6d5b8d30570e2781bb4da483bb569884b0bd0129b62e0b82a5a6776fefe43fee801c70d39de1ea4d4c177f7cedd5ac135e3c64f7b895a' 'e94aa35d92cec92f4b0d487e0569790f3b712b9eaa5107f14a4200578e398ca740bf369f30f070c8beb56a72d1a6d0fc06beb650d798a64f44abe5e3af327728') @@ -97,10 +97,10 @@ prepare() { # Hotfixes echo "Applying hotfixes" patch -p1 -i 
../0001-ZEN-Add-sysctl-and-CONFIG-to-disallow-unprivileged-C.patch - patch -p1 -i ../0002-mm-vmscan-fix-extreme-overreclaim-and-swap-floods.patch - patch -p1 -i ../0003-soundwire-intel-Initialize-clock-stop-timeout.patch - patch -p1 -i ../0004-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch - patch -p1 -i ../0005-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch + patch -p1 -i ../0002-soundwire-intel-Initialize-clock-stop-timeout.patch + patch -p1 -i ../0003-drm-sched-add-DRM_SCHED_FENCE_DONT_PIPELINE-flag.patch + patch -p1 -i ../0004-drm-amdgpu-use-DRM_SCHED_FENCE_DONT_PIPELINE-for-VM-.patch + patch -p1 -i ../0005-drm-i915-improve-the-catch-all-evict-to-handle-lock-.patch # graysky gcc patch echo "Applying graysky gcc patch" @@ -701,7 +701,7 @@ CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y # CONFIG_X86_INTEL_PSTATE=y CONFIG_X86_PCC_CPUFREQ=m -CONFIG_X86_AMD_PSTATE=m +CONFIG_X86_AMD_PSTATE=y CONFIG_X86_ACPI_CPUFREQ=m CONFIG_X86_ACPI_CPUFREQ_CPB=y CONFIG_X86_POWERNOW_K8=m @@ -1252,6 +1252,7 @@ CONFIG_INET_ESP=m CONFIG_INET_ESP_OFFLOAD=m CONFIG_INET_ESPINTCP=y CONFIG_INET_IPCOMP=m +CONFIG_INET_TABLE_PERTURB_ORDER=16 CONFIG_INET_XFRM_TUNNEL=m CONFIG_INET_TUNNEL=m CONFIG_INET_DIAG=m |