Commit message | Author | Date | Files | Lines
* iommu/fsl_pamu: merge iommu_alloc_dma_domain into fsl_pamu_domain_alloc | Christoph Hellwig | 2021-04-07 | 1 | -24/+10
    Keep the functionality to allocate the domain together.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Li Yang <leoyang.li@nxp.com>
    Link: https://lore.kernel.org/r/20210401155256.298656-5-hch@lst.de
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/fsl_pamu: remove support for setting DOMAIN_ATTR_GEOMETRY | Christoph Hellwig | 2021-04-07 | 3 | -68/+5
    The default geometry is the same as the one set by qman_port given that
    FSL_PAMU depends on having 64-bit physical and thus DMA addresses.
    Remove the support to update the geometry and remove the now pointless
    geom_size field.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Li Yang <leoyang.li@nxp.com>
    Link: https://lore.kernel.org/r/20210401155256.298656-4-hch@lst.de
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/fsl_pamu: remove fsl_pamu_get_domain_attr | Christoph Hellwig | 2021-04-07 | 2 | -34/+0
    None of the values returned by this function are ever queried. Also
    remove the DOMAIN_ATTR_FSL_PAMUV1 enum value that is not otherwise used.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Li Yang <leoyang.li@nxp.com>
    Link: https://lore.kernel.org/r/20210401155256.298656-3-hch@lst.de
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu: remove the unused domain_window_disable method | Christoph Hellwig | 2021-04-07 | 2 | -50/+0
    domain_window_disable is wired up by fsl_pamu, but never actually called.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Acked-by: Will Deacon <will@kernel.org>
    Acked-by: Li Yang <leoyang.li@nxp.com>
    Link: https://lore.kernel.org/r/20210401155256.298656-2-hch@lst.de
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/arm-smmu-v3: Maintain a SID->device structure | Jean-Philippe Brucker | 2021-04-07 | 2 | -30/+141
    When handling faults from the event or PRI queue, we need to find the
    struct device associated with a SID. Add an rb_tree to keep track of SIDs.

    Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Reviewed-by: Keqian Zhu <zhukeqian1@huawei.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-8-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
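    As a rough illustration of the mechanism this adds (a minimal sketch,
    not the driver's actual code; the struct and function names here are
    assumptions):

        #include <linux/device.h>
        #include <linux/rbtree.h>
        #include <linux/types.h>

        /* One node per stream ID, kept in an rb_tree sorted by SID. */
        struct sid_entry {
            u32 sid;
            struct device *dev;
            struct rb_node node;
        };

        /* O(log n) SID -> device lookup from the event/PRI queue handler. */
        static struct device *sid_to_dev(struct rb_root *root, u32 sid)
        {
            struct rb_node *n = root->rb_node;

            while (n) {
                struct sid_entry *e = rb_entry(n, struct sid_entry, node);

                if (sid < e->sid)
                    n = n->rb_left;
                else if (sid > e->sid)
                    n = n->rb_right;
                else
                    return e->dev;
            }
            return NULL;
        }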
* iommu: Add a page fault handler | Jean-Philippe Brucker | 2021-04-07 | 4 | -0/+517
    Some systems allow devices to handle I/O Page Faults in the core mm. For
    example, systems implementing the PCIe PRI extension or Arm SMMU stall
    model. Infrastructure for reporting these recoverable page faults was
    added to the IOMMU core by commit 0c830e6b3282 ("iommu: Introduce device
    fault report API"). Add a page fault handler for host SVA.

    IOMMU drivers can now instantiate several fault workqueues and link them
    to IOPF-capable devices. Drivers can choose between a single global
    workqueue, one per IOMMU device, one per low-level fault queue, one per
    domain, etc.

    When it receives a fault event, most commonly in an IRQ handler, the
    IOMMU driver reports the fault using iommu_report_device_fault(), which
    calls the registered handler. The page fault handler then calls the mm
    fault handler, and reports either success or failure with
    iommu_page_response(). After the handler succeeds, the hardware retries
    the access.

    The iopf_param pointer could be embedded into iommu_fault_param. But
    putting iopf_param into the iommu_param structure allows us not to care
    about ordering between calls to iopf_queue_add_device() and
    iommu_register_device_fault_handler().

    Tested-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Reviewed-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
    Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-7-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
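    To make the flow concrete, here is a hedged sketch of how an IOMMU
    driver might wire this up, based on the iopf_queue_*() API this patch
    adds; the my_* names are illustrative and error paths are trimmed:

        #include <linux/iommu.h>

        static struct iopf_queue *iopf_q;

        /* One fault workqueue per IOMMU device, created at probe time. */
        static int my_iommu_probe(struct device *iommu_dev)
        {
            iopf_q = iopf_queue_alloc(dev_name(iommu_dev));
            return iopf_q ? 0 : -ENOMEM;
        }

        /* Link an IOPF-capable endpoint to the queue and register the
         * handler that iommu_report_device_fault() will invoke. */
        static int my_enable_iopf(struct device *dev)
        {
            int ret;

            ret = iopf_queue_add_device(iopf_q, dev);
            if (ret)
                return ret;
            return iommu_register_device_fault_handler(dev, iommu_queue_iopf,
                                                       dev);
        }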
* uacce: Enable IOMMU_DEV_FEAT_IOPF | Jean-Philippe Brucker | 2021-04-07 | 1 | -9/+30
    The IOPF (I/O Page Fault) feature is now enabled independently from the
    SVA feature, because some IOPF implementations are device-specific and
    do not require IOMMU support for PCIe PRI or Arm SMMU stall.

    Enable IOPF unconditionally when enabling SVA for now. In the future, if
    a device driver implementing a uacce interface doesn't need IOPF
    support, it will need to tell the uacce module, for example with a new
    flag.

    Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-6-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/vt-d: Support IOMMU_DEV_FEAT_IOPF | Jean-Philippe Brucker | 2021-04-07 | 1 | -3/+8
    Allow drivers to query and enable IOMMU_DEV_FEAT_IOPF, which amounts to
    checking whether PRI is enabled.

    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-5-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu: Separate IOMMU_DEV_FEAT_IOPF from IOMMU_DEV_FEAT_SVA | Jean-Philippe Brucker | 2021-04-07 | 1 | -3/+17
    Some devices manage I/O Page Faults (IOPF) themselves instead of relying
    on PCIe PRI or Arm SMMU stall. Allow their drivers to enable SVA without
    mandating IOMMU-managed IOPF. The other device drivers now need to first
    enable IOMMU_DEV_FEAT_IOPF before enabling IOMMU_DEV_FEAT_SVA. Enabling
    IOMMU_DEV_FEAT_IOPF on its own doesn't have any effect visible to the
    device driver; it is used in combination with other features.

    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-4-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
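    A minimal sketch of the resulting enable sequence in a device driver
    (illustrative; the function name is an assumption):

        #include <linux/iommu.h>

        static int my_enable_sva(struct device *dev)
        {
            int ret;

            /* IOPF must now be enabled explicitly, before SVA. */
            ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_IOPF);
            if (ret)
                return ret;

            ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
            if (ret)
                iommu_dev_disable_feature(dev, IOMMU_DEV_FEAT_IOPF);
            return ret;
        }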
* iommu/arm-smmu-v3: Use device properties for pasid-num-bits | Jean-Philippe Brucker | 2021-04-07 | 4 | -14/+9
    The pasid-num-bits property shouldn't need a dedicated fwspec field;
    it's a job for device properties. Add properties for IORT, and access
    the number of PASID bits using device_property_read_u32().

    Suggested-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Acked-by: Hanjun Guo <guohanjun@huawei.com>
    Link: https://lore.kernel.org/r/20210401154718.307519-3-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
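    For illustration, reading the property then looks roughly like this
    (a sketch; the helper name is an assumption):

        #include <linux/device.h>
        #include <linux/property.h>
        #include <linux/types.h>

        static u32 my_get_pasid_bits(struct device *dev)
        {
            u32 bits = 0;

            /* Leaves bits at 0 if the property is absent, i.e. no PASID. */
            device_property_read_u32(dev, "pasid-num-bits", &bits);
            return bits;
        }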
* iommu: Fix comment for struct iommu_fwspec | Jean-Philippe Brucker | 2021-04-07 | 1 | -1/+1
    Commit 986d5ecc5699 ("iommu: Move fwspec->iommu_priv to struct
    dev_iommu") removed iommu_priv from fwspec and commit 5702ee24182f
    ("ACPI/IORT: Check ATS capability in root complex nodes") added @flags.
    Update the struct doc.

    Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20210401154718.307519-2-jean-philippe@linaro.org
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu: Stop exporting free_iova_fast() | John Garry | 2021-04-07 | 1 | -1/+0
    Function free_iova_fast() is only referenced by dma-iommu.c, which can
    only be built-in, so stop exporting it.

    This was missed in an earlier tidy-up patch.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/1616675401-151997-5-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu: Delete iommu_dma_free_cpu_cached_iovas() | John Garry | 2021-04-07 | 4 | -20/+2
    Function iommu_dma_free_cpu_cached_iovas() no longer has any caller, so
    delete it. With that, function free_cpu_cached_iovas() may be made
    static.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/1616675401-151997-4-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/vt-d: Remove IOVA domain rcache flushing for CPU offlining | John Garry | 2021-04-07 | 2 | -32/+0
    Now that the core code handles flushing per-IOVA domain CPU rcaches,
    remove the handling here.

    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1616675401-151997-3-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iova: Add CPU hotplug handler to flush rcaches | John Garry | 2021-04-07 | 3 | -1/+31
    Like the Intel IOMMU driver already does, flush the per-IOVA domain CPU
    rcache when a CPU goes offline - there's no point in keeping it.

    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1616675401-151997-2-git-send-email-john.garry@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
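    In outline, the wiring looks something like the following sketch; the
    patch adds a dedicated CPUHP_IOMMU_IOVA_DEAD multi-instance state, but
    the field and callback names here should be treated as assumptions:

        #include <linux/cpuhotplug.h>
        #include <linux/iova.h>

        /* Called on the control CPU after @cpu has gone offline. */
        static int iova_cpuhp_dead(unsigned int cpu, struct hlist_node *node)
        {
            struct iova_domain *iovad;

            iovad = hlist_entry_safe(node, struct iova_domain, cpuhp_dead);
            free_cpu_cached_iovas(cpu, iovad);
            return 0;
        }

        /* Registered once at subsystem init ... */
        static int __init iova_cpuhp_init(void)
        {
            return cpuhp_setup_state_multi(CPUHP_IOMMU_IOVA_DEAD,
                                           "iommu/iova:dead", NULL,
                                           iova_cpuhp_dead);
        }

        /* ... then each IOVA domain adds itself as an instance:
         *     cpuhp_state_add_instance_nocalls(CPUHP_IOMMU_IOVA_DEAD,
         *                                      &iovad->cpuhp_dead);
         */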
* iommu/iova: Improve restart logic | Robin Murphy | 2021-03-18 | 1 | -1/+38
    When restarting after searching below the cached node fails, resetting
    the start point to the anchor node is often overly pessimistic. If
    allocations are made with mixed limits - particularly in the case of the
    opportunistic 32-bit allocation for PCI devices - this could mean
    significant time wasted walking through the whole populated upper range
    just to reach the initial limit. We can improve on that by implementing
    a proper tree traversal to find the first node above the relevant limit,
    and set the exact start point.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/076b3484d1e5057b95d8c387c894bd6ad2514043.1614962123.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
* iommu/iova: Add rbtree entry helper | Robin Murphy | 2021-03-18 | 1 | -9/+14
    Repeating the rb_entry() boilerplate all over the place gets old fast.
    Before adding yet more instances, add a little helper to tidy it up.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/03931d86c0ad71f44b29394e3a8d38bfc32349cd.1614962123.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
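    The helper in question is a one-liner along these lines (a sketch
    matching the description above):

        #include <linux/iova.h>
        #include <linux/rbtree.h>

        /* Replaces repeated rb_entry() boilerplate at every call site. */
        static struct iova *to_iova(struct rb_node *node)
        {
            return rb_entry(node, struct iova, node);
        }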
* iommu/dma: Resurrect the "forcedac" option | Robin Murphy | 2021-03-18 | 4 | -11/+24
    In converting intel-iommu over to the common IOMMU DMA ops, it quietly
    lost the functionality of its "forcedac" option. Since this is a handy
    thing both for testing and for performance optimisation on certain
    platforms, reimplement it under the common IOMMU parameter namespace.

    For the sake of fixing the inadvertent breakage of the Intel-specific
    parameter, remove the dmar_forcedac remnants and hook it up as an alias
    while documenting the transition to the new common parameter.

    Fixes: c588072bba6b ("iommu/vt-d: Convert intel iommu driver to the iommu ops")
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
    Reviewed-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/7eece8e0ea7bfbe2cd0e30789e0d46df573af9b0.1614961776.git.robin.murphy@arm.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
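    Plumbing a boot option like this typically looks as follows (a sketch,
    not necessarily the patch verbatim; only the iommu.forcedac name comes
    from the commit):

        #include <linux/cache.h>
        #include <linux/init.h>
        #include <linux/kernel.h>   /* kstrtobool() */
        #include <linux/printk.h>

        static bool iommu_dma_forcedac __read_mostly;

        static int __init iommu_dma_forcedac_setup(char *str)
        {
            int ret = kstrtobool(str, &iommu_dma_forcedac);

            if (!ret && iommu_dma_forcedac)
                pr_info("Forcing DAC for PCI devices\n");
            return ret;
        }
        early_param("iommu.forcedac", iommu_dma_forcedac_setup);

    Booting with iommu.forcedac=1 then skips the opportunistic 32-bit
    allocation for PCI devices mentioned in the restart-logic patch above.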
* iommu: Check dev->iommu in iommu_dev_xxx functions | Shameer Kolothum | 2021-03-18 | 1 | -9/+15
    The device iommu probe/attach might have failed, leaving dev->iommu
    NULL, and device drivers may still invoke these functions, resulting in
    a crash in iommu vendor driver code. Hence make sure we check for that.

    Fixes: a3a195929d40 ("iommu: Add APIs for multiple domains per device")
    Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/20210303173611.520-1-shameerali.kolothum.thodi@huawei.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>
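    The shape of the fix, sketched from the description above (treat the
    details as illustrative rather than the exact diff):

        #include <linux/iommu.h>

        int iommu_dev_enable_feature(struct device *dev,
                                     enum iommu_dev_features feat)
        {
            /* dev->iommu may be NULL if the IOMMU probe/attach failed. */
            if (dev->iommu && dev->iommu->iommu_dev) {
                const struct iommu_ops *ops = dev->iommu->iommu_dev->ops;

                if (ops->dev_enable_feat)
                    return ops->dev_enable_feat(dev, feat);
            }
            return -ENODEV;
        }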
* Linux 5.12-rc3 | Linus Torvalds | 2021-03-14 | 1 | -1/+1
* prctl: fix PR_SET_MM_AUXV kernel stack leak | Alexey Dobriyan | 2021-03-14 | 1 | -1/+1
    Doing a

        prctl(PR_SET_MM, PR_SET_MM_AUXV, addr, 1);

    will copy 1 byte from userspace to a (quite big) on-stack array and then
    stash everything to mm->saved_auxv. The AT_NULL terminator will be
    inserted at the very end. The /proc/*/auxv handler will find that
    AT_NULL terminator and copy the original stack contents to userspace.

    This devious scheme requires CAP_SYS_RESOURCE.

    Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
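    A simplified illustration of the leak pattern (not the exact kernel
    code; the memset in the final comment is one obvious cure, not
    necessarily the fix actually applied):

        #include <linux/auxvec.h>
        #include <linux/mm_types.h>
        #include <linux/string.h>
        #include <linux/uaccess.h>

        static int set_auxv(struct mm_struct *mm, const void __user *addr,
                            unsigned long len)
        {
            unsigned long user_auxv[AT_VECTOR_SIZE]; /* stale stack bytes */

            if (copy_from_user(user_auxv, addr, len)) /* len may be 1 */
                return -EFAULT;

            user_auxv[AT_VECTOR_SIZE - 1] = AT_NULL; /* terminator at end */

            /* Everything between offset len and the terminator is whatever
             * happened to be on the stack, later readable through
             * /proc/<pid>/auxv: */
            memcpy(mm->saved_auxv, user_auxv, sizeof(user_auxv));
            return 0;
        }
        /* e.g. memset(user_auxv, 0, sizeof(user_auxv)) before the copy
         * would plug the leak. */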
* Merge tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 7 | -18/+8
    Pull irq fixes from Thomas Gleixner:
     "A set of irqchip updates:
      - Make the GENERIC_IRQ_MULTI_HANDLER configuration correct
      - Add a missing DT compatible string for the Ingenic driver
      - Remove the pointless debugfs_file pointer from struct irqdomain"

    * tag 'irq-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      irqchip/ingenic: Add support for the JZ4760
      dt-bindings/irq: Add compatible string for the JZ4760B
      irqchip: Do not blindly select CONFIG_GENERIC_IRQ_MULTI_HANDLER
      ARM: ep93xx: Select GENERIC_IRQ_MULTI_HANDLER directly
      irqdomain: Remove debugfs_file from struct irq_domain
| * Merge tag 'irqchip-fixes-5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/urgent | Thomas Gleixner | 2021-03-14 | 11444 | -286202/+488784
    Pull irqchip fixes from Marc Zyngier:
     - More compatible strings for the Ingenic irqchip (introducing the
       JZ4760B SoC)
     - Select GENERIC_IRQ_MULTI_HANDLER on the ARM ep93xx platform
     - Drop all GENERIC_IRQ_MULTI_HANDLER selections from the irqchip
       Kconfig, now relying on the architecture to get it right
     - Drop the debugfs_file field from struct irq_domain, now that debugfs
       can track things on its own
| | * irqchip/ingenic: Add support for the JZ4760 | Paul Cercueil | 2021-03-09 | 2 | -0/+2
    Add support for the interrupt controller found in the JZ4760 SoC, which
    works exactly like the one in the JZ4770.

    Signed-off-by: Paul Cercueil <paul@crapouillou.net>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210307172014.73481-2-paul@crapouillou.net
| | * dt-bindings/irq: Add compatible string for the JZ4760B | Paul Cercueil | 2021-03-09 | 1 | -0/+1
    Add the ingenic,jz4760b-intc compatible string with a fallback to the
    ingenic,jz4760-intc compatible string.

    Signed-off-by: Paul Cercueil <paul@crapouillou.net>
    Acked-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210307172014.73481-1-paul@crapouillou.net
| | * irqchip: Do not blindly select CONFIG_GENERIC_IRQ_MULTI_HANDLER | Marc Zyngier | 2021-03-08 | 1 | -9/+0
    Implementing CONFIG_GENERIC_IRQ_MULTI_HANDLER is a decision that is made
    at the architecture level, and shouldn't involve the irqchip at all (we
    even provide a fallback helper when the option isn't selected).

    Drop all instances of such selection from non-arch code.

    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/20210217142800.2547737-1-maz@kernel.org
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Hector Martin <marcan@marcan.st>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: James Morse <james.morse@arm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
| | * ARM: ep93xx: Select GENERIC_IRQ_MULTI_HANDLER directly | Marc Zyngier | 2021-03-08 | 1 | -0/+1
    ep93xx currently relies on CONFIG_ARM_VIC to select
    GENERIC_IRQ_MULTI_HANDLER. Given that this is logically a platform
    architecture property, add the selection of GENERIC_IRQ_MULTI_HANDLER at
    the platform level. Further patches will remove the selection from the
    irqchip side.

    Reported-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Tested-by: Hector Martin <marcan@marcan.st>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: James Morse <james.morse@arm.com>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will@kernel.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
| | * irqdomain: Remove debugfs_file from struct irq_domain | Greg Kroah-Hartman | 2021-03-08 | 2 | -9/+4
    There's no need to keep around a dentry pointer to a simple file that
    debugfs itself can look up when we need to remove it from the system. So
    simplify the code by deleting the variable and cleaning up the logic
    around the debugfs file.

    Cc: Marc Zyngier <maz@kernel.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Marc Zyngier <maz@kernel.org>
    Link: https://lore.kernel.org/r/YCvYV53ZdzQSWY6w@kroah.com
* | | Merge tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 1 | -21/+39
    Pull timer fix from Thomas Gleixner:
     "A single fix for hrtimers to prevent an interrupt storm caused by the
      lack of reevaluation of the timers which expire in softirq context
      under certain circumstances, e.g. when the clock was set"

    * tag 'timers-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event()
| * | | hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event() | Anna-Maria Behnsen | 2021-03-08 | 1 | -21/+39
    hrtimer_force_reprogram() and hrtimer_interrupt() invoke
    __hrtimer_get_next_event() to find the earliest expiry time of hrtimer
    bases. __hrtimer_get_next_event() does not update
    cpu_base::[softirq_]expires_next to preserve reprogramming logic. That
    needs to be done at the callsites.

    hrtimer_force_reprogram() updates cpu_base::softirq_expires_next only
    when the first expiring timer is a softirq timer and the soft interrupt
    is not activated. That's wrong because cpu_base::softirq_expires_next is
    left stale when the first expiring timer of all bases is a timer which
    expires in hard interrupt context. hrtimer_interrupt() never updates
    cpu_base::softirq_expires_next, which is wrong too.

    That becomes a problem when clock_settime() sets CLOCK_REALTIME forward
    and the first soft expiring timer is in the CLOCK_REALTIME_SOFT base.
    Setting CLOCK_REALTIME forward moves the clock MONOTONIC based expiry
    time of that timer before the stale cpu_base::softirq_expires_next.

    cpu_base::softirq_expires_next is cached to make the check for raising
    the soft interrupt fast. In the above case the soft interrupt won't be
    raised until clock monotonic reaches the stale
    cpu_base::softirq_expires_next value. That's incorrect, but what's worse
    is that if the softirq timer becomes the first expiring timer of all
    clock bases after the hard expiry timer has been handled, the
    reprogramming of the clockevent from hrtimer_interrupt() will result in
    an interrupt storm. That happens because the reprogramming does not use
    cpu_base::softirq_expires_next; it uses __hrtimer_get_next_event(),
    which returns the actual expiry time. Once clock MONOTONIC reaches
    cpu_base::softirq_expires_next the soft interrupt is raised and the
    storm subsides.

    Change the logic in hrtimer_force_reprogram() to evaluate the soft and
    hard bases separately, update softirq_expires_next and handle the case
    when a soft expiring timer is the first of all bases by comparing the
    expiry times and updating the required cpu base fields. Split this
    functionality into a separate function to be able to use it in
    hrtimer_interrupt() as well without copy-paste.

    Fixes: 5da70160462e ("hrtimer: Implement support for softirq based hrtimers")
    Reported-by: Mikael Beckius <mikael.beckius@windriver.com>
    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Tested-by: Mikael Beckius <mikael.beckius@windriver.com>
    Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lore.kernel.org/r/20210223160240.27518-1-anna-maria@linutronix.de
* | | Merge tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 2 | -67/+63
    Pull scheduler fixes from Thomas Gleixner:
     "A set of scheduler updates:
      - Prevent a NULL pointer dereference in the migration_cpu_stop()
        mechanism
      - Prevent self concurrency of affine_move_task()
      - Small fixes and cleanups related to task migration/affinity setting
      - Ensure that sync_runqueues_membarrier_state() is invoked on the
        current CPU when it is in the cpu mask"

    * tag 'sched-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      sched/membarrier: fix missing local execution of ipi_sync_rq_state()
      sched: Simplify set_affinity_pending refcounts
      sched: Fix affine_move_task() self-concurrency
      sched: Optimize migration_cpu_stop()
      sched: Collate affine_move_task() stoppers
      sched: Simplify migration_cpu_stop()
      sched: Fix migration_cpu_stop() requeueing
| * | | sched/membarrier: fix missing local execution of ipi_sync_rq_state() | Mathieu Desnoyers | 2021-03-06 | 1 | -3/+1
    The function sync_runqueues_membarrier_state() should copy the
    membarrier state from the @mm received as parameter to each runqueue
    currently running tasks using that mm. However, the use of
    smp_call_function_many() skips the current runqueue, which is
    unintended. Replace by a call to on_each_cpu_mask().

    Fixes: 227a4aadc75b ("sched/membarrier: Fix p->mm->membarrier_state racy load")
    Reported-by: Nadav Amit <nadav.amit@gmail.com>
    Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Cc: stable@vger.kernel.org # 5.4.x+
    Link: https://lore.kernel.org/r/74F1E842-4A84-47BF-B6C2-5407DFDD4A4A@gmail.com
| * | | sched: Simplify set_affinity_pending refcounts | Peter Zijlstra | 2021-03-06 | 1 | -12/+20
    Now that we have set_affinity_pending::stop_pending to indicate if a
    stopper is in progress, and we have the guarantee that if that stopper
    exists, it will (eventually) complete our @pending, we can simplify the
    refcount scheme by no longer counting the stopper thread.

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.724130207@infradead.org
| * | | sched: Fix affine_move_task() self-concurrency | Peter Zijlstra | 2021-03-06 | 1 | -3/+12
    Consider:

        sched_setaffinity(p, X);
        sched_setaffinity(p, Y);

    Then the first will install p->migration_pending = &my_pending; and
    issue stop_one_cpu_nowait(pending); the second one will read
    p->migration_pending and _also_ issue stop_one_cpu_nowait(pending), the
    _SAME_ @pending.

    This causes stopper list corruption.

    Add set_affinity_pending::stop_pending, to indicate if a stopper is in
    progress.

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.649146419@infradead.org
| * | | sched: Optimize migration_cpu_stop() | Peter Zijlstra | 2021-03-06 | 1 | -1/+12
    When the purpose of migration_cpu_stop() is to migrate the task to 'any'
    valid CPU, don't migrate the task when it's already running on a valid
    CPU.

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.569238629@infradead.org
| * | | sched: Collate affine_move_task() stoppers | Peter Zijlstra | 2021-03-06 | 1 | -15/+8
    The SCA_MIGRATE_ENABLE and task_running() cases are almost identical,
    collapse them to avoid further duplication.

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.500108964@infradead.org
| * | | sched: Simplify migration_cpu_stop() | Peter Zijlstra | 2021-03-06 | 1 | -48/+8
    When affine_move_task() issues a migration_cpu_stop(), the purpose of
    that function is to complete that @pending, not any random other
    p->migration_pending that might have gotten installed since.

    This realization much simplifies migration_cpu_stop() and allows further
    necessary steps to fix all this as it provides the guarantee that
    @pending's stopper will complete @pending (and not some random other
    @pending).

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.430014682@infradead.org
| * | | sched: Fix migration_cpu_stop() requeueing | Peter Zijlstra | 2021-03-06 | 1 | -11/+28
    When affine_move_task(p) is called on a running task @p, which is not
    otherwise already changing affinity, we'll first set
    p->migration_pending and then do:

        stop_one_cpu(cpu_of_rq(rq), migration_cpu_stop, &arg);

    This then gets us to migration_cpu_stop() running on the CPU that was
    previously running our victim task @p. If we find that our task is no
    longer on that runqueue (this can happen because of a concurrent
    migration due to load-balance etc.), then we'll end up at the:

        } else if (dest_cpu < 0 || pending) {

    branch. Which we'll take because we set pending earlier. Here we first
    check if the task @p has already satisfied the affinity constraints; if
    so, we bail early [A]. Otherwise we'll reissue migration_cpu_stop() onto
    the CPU that is now hosting our task @p:

        stop_one_cpu_nowait(cpu_of(rq), migration_cpu_stop,
                            &pending->arg, &pending->stop_work);

    Except, we've never initialized pending->arg, which will be all 0s. This
    then results in running migration_cpu_stop() on the next CPU with
    arg->p == NULL, which gives the by now obvious result of fireworks.

    The cure is to change affine_move_task() to always use pending->arg;
    furthermore we can use the exact same pattern as the SCA_MIGRATE_ENABLE
    case, since we'll block on the pending->done completion anyway, so there
    is no point in adding yet another completion in stop_one_cpu().

    This then gives a clear distinction between the two migration_cpu_stop()
    use cases:

      - sched_exec() / migrate_task_to() : arg->pending == NULL
      - affine_move_task()               : arg->pending != NULL

    And we can have it ignore p->migration_pending when !arg->pending. Any
    stop work from sched_exec() / migrate_task_to() is in addition to stop
    works from affine_move_task(), which will be sufficient to issue the
    completion.

    Fixes: 6d337eab041d ("sched: Fix migrate_disable() vs set_cpus_allowed_ptr()")
    Cc: stable@kernel.org
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Link: https://lkml.kernel.org/r/20210224131355.357743989@infradead.org
* | | Merge tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 2 | -6/+7
    Pull objtool fix from Thomas Gleixner:
     "A single objtool fix to handle the PUSHF/POPF validation correctly for
      the paravirt changes which modified arch_local_irq_restore not to use
      popf"

    * tag 'objtool-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      objtool,x86: Fix uaccess PUSHF/POPF validation
| * | | objtool,x86: Fix uaccess PUSHF/POPF validation | Peter Zijlstra | 2021-03-12 | 2 | -6/+7
    Commit ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not use
    popf") replaced "push %reg; popf" with something like:
    "test $0x200, %reg; jz 1f; sti; 1:", which breaks the pushf/popf
    symmetry that commit ea24213d8088 ("objtool: Add UACCESS validation")
    relies on. The result is:

        drivers/gpu/drm/amd/amdgpu/si.o: warning: objtool: si_common_hw_init()+0xf36: PUSHF stack exhausted

    Meanwhile, commit c9c324dc22aa ("objtool: Support stack layout changes
    in alternatives") means that we can actually use stack-ops in
    alternatives, which means we can revert 1ff865e343c2 ("x86,smap: Fix
    smap_{save,restore}() alternatives").

    That in turn means we can limit the PUSHF/POPF handling of ea24213d8088
    to those instructions that are in alternatives.

    Fixes: ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not use popf")
    Reported-by: Borislav Petkov <bp@alien8.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Link: https://lkml.kernel.org/r/YEY4rIbQYa5fnnEp@hirez.programming.kicks-ass.net
* | | Merge tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 3 | -10/+9
    Pull locking fixes from Thomas Gleixner:
     "A couple of locking fixes:
      - A fix for the static_call mechanism so it handles unaligned
        addresses correctly.
      - Make u64_stats_init() a macro so every instance gets a separate
        lockdep key.
      - Make seqcount_latch_init() a macro as well to preserve the static
        variable which is used for the lockdep key"

    * tag 'locking-urgent-2021-03-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      seqlock,lockdep: Fix seqcount_latch_init()
      u64_stats,lockdep: Fix u64_stats_init() vs lockdep
      static_call: Fix the module key fixup
| * | | seqlock,lockdep: Fix seqcount_latch_init() | Peter Zijlstra | 2021-03-10 | 1 | -4/+1
    seqcount_init() must be a macro in order to preserve the static variable
    that is used for the lockdep key. Don't then wrap it in an inline
    function, which destroys that.

    Luckily there aren't many users of this function, but fix it before it
    becomes a problem.

    Fixes: 80793c3471d9 ("seqlock: Introduce seqcount_latch_t")
    Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/YEeFEbNUVkZaXDp4@hirez.programming.kicks-ass.net
| * | | u64_stats,lockdep: Fix u64_stats_init() vs lockdep | Peter Zijlstra | 2021-03-10 | 1 | -3/+4
    Jakub reported that:

        static struct net_device *rtl8139_init_board(struct pci_dev *pdev)
        {
            ...
            u64_stats_init(&tp->rx_stats.syncp);
            u64_stats_init(&tp->tx_stats.syncp);
            ...
        }

    results in lockdep getting confused between the RX and TX stats lock.
    This is because u64_stats_init() is an inline calling seqcount_init(),
    which is a macro using a static variable to generate a lockdep class.

    By wrapping that in an inline, we negate the effect of the macro and
    fold the static key variable, hence the confusion.

    Fix by also making u64_stats_init() a macro for the case where it
    matters, leaving the other case an inline for argument validation etc.

    Reported-by: Jakub Kicinski <kuba@kernel.org>
    Debugged-by: "Ahmed S. Darwish" <a.darwish@linutronix.de>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Tested-by: "Erhard F." <erhard_f@mailbox.org>
    Link: https://lkml.kernel.org/r/YEXicy6+9MksdLZh@hirez.programming.kicks-ass.net
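    Both this fix and the seqcount_latch_init() one above hinge on the same
    mechanism, illustrated here with hypothetical my_stats_* names:

        #include <linux/lockdep.h>

        struct my_stats;
        void __my_stats_init(struct my_stats *s, const char *name,
                             struct lock_class_key *key);

        /* Macro form: each call site gets its own static lockdep key. */
        #define my_stats_init(s)                                \
        do {                                                    \
            static struct lock_class_key __key;                 \
            __my_stats_init((s), #s, &__key);                   \
        } while (0)

        /* Wrapping the macro in an inline folds every caller onto the one
         * __key inside this function, so lockdep can no longer tell, say,
         * the RX and TX syncps apart: */
        static inline void broken_stats_init(struct my_stats *s)
        {
            my_stats_init(s);
        }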
| * | | static_call: Fix the module key fixup | Peter Zijlstra | 2021-03-06 | 1 | -3/+4
    Provided the target address of a R_X86_64_PC32 relocation is aligned,
    the low two bits should be invariant between the relative and absolute
    value.

    Turns out the address is not aligned and things go sideways; ensure we
    transfer the bits in the absolute form when fixing up the key address.

    Fixes: 73f44fe19d35 ("static_call: Allow module use without exposing static_call_key")
    Reported-by: Steven Rostedt <rostedt@goodmis.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Tested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    Link: https://lkml.kernel.org/r/20210225220351.GE4746@worktop.programming.kicks-ass.net
* | | Merge tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 5 | -15/+51
    Pull perf fixes from Borislav Petkov:
     - Make sure PMU internal buffers are flushed for per-CPU events too and
       properly handle PID/TID for large PEBS.
     - Handle the case properly when there's no PMU and therefore return an
       empty list of perf MSRs for VMX to switch instead of reading random
       garbage from the stack.

    * tag 'perf_urgent_for_v5.12-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      x86/perf: Use RET0 as default for guest_get_msrs to handle "no PMU" case
      perf/x86/intel: Set PERF_ATTACH_SCHED_CB for large PEBS and LBR
      perf/core: Flush PMU internal buffers for per-CPU events
| * | | x86/perf: Use RET0 as default for guest_get_msrs to handle "no PMU" case | Sean Christopherson | 2021-03-10 | 2 | -10/+7
    Initialize x86_pmu.guest_get_msrs to return 0/NULL to handle the "nop"
    case. Patching in perf_guest_get_msrs_nop() during setup does not work
    if there is no PMU, as setup bails before updating the static calls,
    leaving x86_pmu.guest_get_msrs NULL and thus a complete nop. Ultimately,
    this causes VMX abort on VM-Exit due to KVM putting random garbage from
    the stack into the MSR load list.

    Add a comment in KVM to note that nr_msrs is valid if and only if the
    return value is non-NULL.

    Fixes: abd562df94d1 ("x86/perf: Use static_call for x86_pmu.guest_get_msrs")
    Reported-by: Dmitry Vyukov <dvyukov@google.com>
    Reported-by: syzbot+cce9ef2dd25246f815ee@syzkaller.appspotmail.com
    Suggested-by: Peter Zijlstra <peterz@infradead.org>
    Signed-off-by: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Link: https://lkml.kernel.org/r/20210309171019.1125243-1-seanjc@google.com
| * | | perf/x86/intel: Set PERF_ATTACH_SCHED_CB for large PEBS and LBR | Kan Liang | 2021-03-06 | 1 | -1/+4
    To supply a PID/TID for large PEBS, it requires flushing the PEBS buffer
    in a context switch.

    For normal LBRs, a context switch can flip the address space and LBR
    entries are not tagged with an identifier, so we need to wipe the LBR,
    even for per-cpu events.

    For LBR callstack, saving and restoring the stack is required during a
    context switch.

    Set PERF_ATTACH_SCHED_CB for the event with large PEBS & LBR.

    Fixes: 9c964efa4330 ("perf/x86/intel: Drain the PEBS buffer during context switches")
    Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lkml.kernel.org/r/20201130193842.10569-2-kan.liang@linux.intel.com
| * | | perf/core: Flush PMU internal buffers for per-CPU events | Kan Liang | 2021-03-06 | 2 | -4/+40
    Sometimes the PMU internal buffers have to be flushed for per-CPU events
    during a context switch, e.g., large PEBS. Otherwise, the perf tool may
    report samples in locations that do not belong to the process where the
    samples are processed in, because PEBS does not tag samples with
    PID/TID.

    The current code only flushes the buffers for a per-task event. It
    doesn't check a per-CPU event.

    Add a new event state flag, PERF_ATTACH_SCHED_CB, to indicate that the
    PMU internal buffers have to be flushed for this event during a context
    switch.

    Add sched_cb_entry and perf_sched_cb_usages back to track the PMU/cpuctx
    which is required to be flushed.

    Only need to invoke the sched_task() for per-CPU events in this patch.
    The per-task events have been handled in
    perf_event_context_sched_in/out already.

    Fixes: 9c964efa4330 ("perf/x86/intel: Drain the PEBS buffer during context switches")
    Reported-by: Gabriel Marin <gmx@google.com>
    Originally-by: Namhyung Kim <namhyung@kernel.org>
    Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Link: https://lkml.kernel.org/r/20201130193842.10569-1-kan.liang@linux.intel.com
* | | Merge tag 'efi-urgent-for-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 2021-03-14 | 1 | -0/+16
    Pull EFI fix from Ard Biesheuvel via Borislav Petkov:
     "Fix an oversight in the handling of EFI_RT_PROPERTIES_TABLE, which was
      added in v5.10, but failed to take the SetVirtualAddressMap() RT
      service into account"

    * tag 'efi-urgent-for-v5.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
      efi: stub: omit SetVirtualAddressMap() if marked unsupported in RT_PROP table
| * | | efi: stub: omit SetVirtualAddressMap() if marked unsupported in RT_PROP table | Ard Biesheuvel | 2021-03-07 | 1 | -0/+16
    The EFI_RT_PROPERTIES_TABLE contains a mask of runtime services that are
    available after ExitBootServices(). This mostly does not concern the EFI
    stub at all, given that it runs before that. However, there is one call
    that is made at runtime, which is the call to SetVirtualAddressMap()
    (which is not even callable at boot time to begin with).

    So add the missing handling of the RT_PROP table to ensure that we only
    call SetVirtualAddressMap() if it is not being advertised as unsupported
    by the firmware.

    Cc: <stable@vger.kernel.org> # v5.10+
    Tested-by: Shawn Guo <shawn.guo@linaro.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
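    In sketch form, the added check amounts to something like this; the type
    and bit names come from linux/efi.h, but get_efi_config_table() is the
    stub's internal helper and the exact usage here is an assumption:

        #include <linux/efi.h>

        static bool can_set_virtual_address_map(void)
        {
            const efi_rt_properties_table_t *tbl;

            tbl = get_efi_config_table(EFI_RT_PROPERTIES_TABLE_GUID);
            if (!tbl)
                return true; /* no table: nothing marked unsupported */

            return tbl->runtime_services_supported &
                   EFI_RT_SUPPORTED_SET_VIRTUAL_ADDRESS_MAP;
        }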