author     Linus Torvalds <torvalds@linux-foundation.org>   2020-10-15 18:42:13 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2020-10-15 18:42:13 -0700
commit     9ff9b0d392ea08090cd1780fb196f36dbb586529 (patch)
tree       276a3a5c4525b84dee64eda30b423fc31bf94850 /drivers/net/ethernet/smsc
parent     840e5bb326bbcb16ce82dd2416d2769de4839aea (diff)
parent     105faa8742437c28815b2a3eb8314ebc5fd9288c (diff)
download   linux-9ff9b0d392ea08090cd1780fb196f36dbb586529.tar.gz
Merge tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
- Add redirect_neigh() BPF packet redirect helper, which limits stack
traversal in common container configs and improves TCP back-pressure.
Daniel reports a ~10Gbps => ~15Gbps single-stream TCP performance gain.
- Expand netlink policy support and improve policy export to user
space. (Ge)netlink core performs request validation according to
declared policies. Expand the expressiveness of those policies
(min/max length and bitmasks). Allow dumping policies for particular
commands. This is used for feature discovery by user space (instead
of kernel version parsing or trial and error).
- Support IGMPv3/MLDv2 multicast listener discovery protocols in
bridge.
- Allow more than 255 IPv4 multicast interfaces.
- Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
packets of TCPv6.
- In Multipath TCP (MPTCP), support concurrent transmission of data on
multiple subflows in a load-balancing scenario. Enhance advertising
addresses via the RM_ADDR/ADD_ADDR options.
- Support SMC-Dv2 version of SMC, which enables multi-subnet
deployments.
- Allow more calls to the same peer in RxRPC.
- Support two new Controller Area Network (CAN) protocols - CAN-FD and
ISO 15765-2:2016.
- Add an xfrm/IPsec compat layer, solving the problem of 32-bit user
space on a 64-bit kernel.
- Add TC actions for implementing MPLS L2 VPNs.
- Improve nexthop code - e.g. better handle various corner cases when
nexthop objects are removed from groups, skip unnecessary
notifications, and make it easier to offload nexthops into HW by
converting to a blocking notifier.
- Support adding and consuming TCP header options by BPF programs,
opening the doors for easy experimental and deployment-specific TCP
option use.
- Reorganize TCP congestion control (CC) initialization to simplify
life of TCP CC implemented in BPF.
- Add support for shipping BPF programs with the kernel and loading
them early on boot via the User Mode Driver mechanism, hence reusing
all the user space infra we have.
- Support sleepable BPF programs, initially targeting LSM and tracing.
- Add bpf_d_path() helper for returning the full path for a given
'struct path' (a usage sketch follows this list).
- Make bpf_tail_call compatible with bpf-to-bpf calls.
- Allow BPF programs to call map_update_elem on sockmaps.
- Add BPF Type Format (BTF) support for type and enum discovery, as
well as support for using BTF within the kernel itself (current use
is for pretty printing structures).
- Support listing and getting information about bpf_links via the bpf
syscall (see the iteration sketch after this list).
- Enhance kernel interfaces around NIC firmware update. Allow
specifying overwrite mask to control if settings etc. are reset
during update; report expected max time operation may take to users;
support firmware activation without machine reboot incl. limits of
how much impact reset may have (e.g. dropping link or not).
- Extend ethtool configuration interface to report IEEE-standard
counters, to limit the need for per-vendor logic in user space.
- Adopt or extend devlink use for debug, monitoring, fw update in many
drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
dpaa2-eth).
- In mlxsw expose critical and emergency SFP module temperature alarms.
Refactor port buffer handling to make the defaults more suitable and
support setting these values explicitly via the DCBNL interface.
- Add XDP support for Intel's igb driver.
- Support offloading TC flower classification and filtering rules to
mscc_ocelot switches.
- Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
fixed interval period pulse generator and one-step timestamping in
dpaa-eth.
- Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
offload.
- Add Lynx PHY/PCS MDIO module, and convert various drivers which have
this HW to use it. Convert mvpp2 to split PCS.
- Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
7-port Mediatek MT7531 IP.
- Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
and wcn3680 support in wcn36xx.
- Improve performance for packets which don't require much offloading on
recent Mellanox NICs by 20% by making multiple packets share a
descriptor entry.
- Move chelsio inline crypto drivers (for TLS and IPsec) from the
crypto subtree to drivers/net. Move MDIO drivers out of the phy
directory.
- Clean up a lot of W=1 warnings; reportedly, the actively developed
subsections of networking drivers should now build W=1 warning-free.
- Make sure drivers don't use in_interrupt() to dynamically adapt their
code. Convert tasklets to use the new tasklet_setup() API (sadly this
conversion is not yet complete); a sketch of the new callback pattern
follows this list.
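The tasklet_setup() conversion is mechanical; the smc91x.c hunk in the diff below is one instance of it. A minimal sketch of the before/after pattern, with made-up my_priv/my_tx_task names rather than anything taken from this merge:

```c
/*
 * Illustrative only.  The old style passed an opaque cookie:
 *
 *     tasklet_init(&priv->tx_task, my_tx_task, (unsigned long)dev);
 *     static void my_tx_task(unsigned long data) { ... }
 */
#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct my_priv {
	struct net_device *dev;
	struct tasklet_struct tx_task;
};

/* New style: the callback receives the tasklet_struct itself. */
static void my_tx_task(struct tasklet_struct *t)
{
	/* Recover the enclosing private struct from the embedded tasklet. */
	struct my_priv *priv = from_tasklet(priv, t, tx_task);

	netdev_dbg(priv->dev, "tx tasklet ran\n");
}

static void my_init(struct my_priv *priv)
{
	/* No (unsigned long) cookie any more. */
	tasklet_setup(&priv->tx_task, my_tx_task);
}
```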
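bpf_d_path() takes a 'struct path *' and writes the resolved pathname into a caller-supplied buffer; it is only callable from BTF-enabled tracing/LSM programs attached to an allowlisted set of kernel functions. A hedged sketch, assuming a bpftool-generated vmlinux.h and that filp_close() is on that allowlist:

```c
// SPDX-License-Identifier: GPL-2.0
/* Sketch: print the path of every file being closed. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("fentry/filp_close")
int BPF_PROG(trace_close, struct file *file)
{
	char buf[256];

	/* bpf_d_path() resolves file->f_path into a NUL-terminated string. */
	if (bpf_d_path(&file->f_path, buf, sizeof(buf)) < 0)
		return 0;

	bpf_printk("closing %s", buf);
	return 0;
}
```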
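For listing bpf_links, the usual route from user space is libbpf's ID-iteration wrappers around the bpf syscall (bpf_link_get_next_id(), bpf_link_get_fd_by_id(), bpf_obj_get_info_by_fd()). A minimal sketch with error handling mostly omitted; CAP_SYS_ADMIN is assumed:

```c
#include <stdio.h>
#include <unistd.h>
#include <bpf/bpf.h>

int main(void)
{
	__u32 id = 0;

	/* Walk all bpf_link IDs known to the kernel. */
	while (!bpf_link_get_next_id(id, &id)) {
		struct bpf_link_info info = {};
		__u32 len = sizeof(info);
		int fd = bpf_link_get_fd_by_id(id);

		if (fd < 0)
			continue;
		if (!bpf_obj_get_info_by_fd(fd, &info, &len))
			printf("link %u: type %u prog %u\n",
			       info.id, info.type, info.prog_id);
		close(fd);
	}
	return 0;
}
```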
* tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
net, sockmap: Don't call bpf_prog_put() on NULL pointer
bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
bpf, sockmap: Add locking annotations to iterator
netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
net: fix pos incrementment in ipv6_route_seq_next
net/smc: fix invalid return code in smcd_new_buf_create()
net/smc: fix valid DMBE buffer sizes
net/smc: fix use-after-free of delayed events
bpfilter: Fix build error with CONFIG_BPFILTER_UMH
cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
bpf: Fix register equivalence tracking.
rxrpc: Fix loss of final ack on shutdown
rxrpc: Fix bundle counting for exclusive connections
netfilter: restore NF_INET_NUMHOOKS
ibmveth: Identify ingress large send packets.
ibmveth: Switch order of ibmveth_helper calls.
cxgb4: handle 4-tuple PEDIT to NAT mode translation
selftests: Add VRF route leaking tests
...
Diffstat (limited to 'drivers/net/ethernet/smsc')
-rw-r--r--  drivers/net/ethernet/smsc/epic100.c   | 71
-rw-r--r--  drivers/net/ethernet/smsc/smc91x.c    | 13
-rw-r--r--  drivers/net/ethernet/smsc/smsc911x.c  |  6
-rw-r--r--  drivers/net/ethernet/smsc/smsc9420.c  | 51
4 files changed, 78 insertions, 63 deletions
diff --git a/drivers/net/ethernet/smsc/epic100.c b/drivers/net/ethernet/smsc/epic100.c
index d950b312c418..51cd7dca91cd 100644
--- a/drivers/net/ethernet/smsc/epic100.c
+++ b/drivers/net/ethernet/smsc/epic100.c
@@ -374,13 +374,15 @@ static int epic_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 	ep->mii.phy_id_mask = 0x1f;
 	ep->mii.reg_num_mask = 0x1f;
 
-	ring_space = pci_alloc_consistent(pdev, TX_TOTAL_SIZE, &ring_dma);
+	ring_space = dma_alloc_coherent(&pdev->dev, TX_TOTAL_SIZE, &ring_dma,
+					GFP_KERNEL);
 	if (!ring_space)
 		goto err_out_iounmap;
 	ep->tx_ring = ring_space;
 	ep->tx_ring_dma = ring_dma;
 
-	ring_space = pci_alloc_consistent(pdev, RX_TOTAL_SIZE, &ring_dma);
+	ring_space = dma_alloc_coherent(&pdev->dev, RX_TOTAL_SIZE, &ring_dma,
+					GFP_KERNEL);
 	if (!ring_space)
 		goto err_out_unmap_tx;
 	ep->rx_ring = ring_space;
@@ -493,9 +495,11 @@ out:
 	return ret;
 
 err_out_unmap_rx:
-	pci_free_consistent(pdev, RX_TOTAL_SIZE, ep->rx_ring, ep->rx_ring_dma);
+	dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring,
+			  ep->rx_ring_dma);
 err_out_unmap_tx:
-	pci_free_consistent(pdev, TX_TOTAL_SIZE, ep->tx_ring, ep->tx_ring_dma);
+	dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring,
+			  ep->tx_ring_dma);
 err_out_iounmap:
 	pci_iounmap(pdev, ioaddr);
 err_out_free_netdev:
@@ -918,8 +922,10 @@ static void epic_init_ring(struct net_device *dev)
 		if (skb == NULL)
 			break;
 		skb_reserve(skb, 2);	/* 16 byte align the IP header. */
-		ep->rx_ring[i].bufaddr = pci_map_single(ep->pci_dev,
-			skb->data, ep->rx_buf_sz, PCI_DMA_FROMDEVICE);
+		ep->rx_ring[i].bufaddr = dma_map_single(&ep->pci_dev->dev,
+							skb->data,
+							ep->rx_buf_sz,
+							DMA_FROM_DEVICE);
 		ep->rx_ring[i].rxstatus = DescOwn;
 	}
 	ep->dirty_rx = (unsigned int)(i - RX_RING_SIZE);
@@ -955,8 +961,9 @@ static netdev_tx_t epic_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	entry = ep->cur_tx % TX_RING_SIZE;
 
 	ep->tx_skbuff[entry] = skb;
-	ep->tx_ring[entry].bufaddr = pci_map_single(ep->pci_dev, skb->data,
-						    skb->len, PCI_DMA_TODEVICE);
+	ep->tx_ring[entry].bufaddr = dma_map_single(&ep->pci_dev->dev,
+						    skb->data, skb->len,
+						    DMA_TO_DEVICE);
 	if (free_count < TX_QUEUE_LEN/2) {/* Typical path */
 		ctrl_word = 0x100000; /* No interrupt */
 	} else if (free_count == TX_QUEUE_LEN/2) {
@@ -1036,8 +1043,9 @@ static void epic_tx(struct net_device *dev, struct epic_private *ep)
 		/* Free the original skb. */
 		skb = ep->tx_skbuff[entry];
-		pci_unmap_single(ep->pci_dev, ep->tx_ring[entry].bufaddr,
-				 skb->len, PCI_DMA_TODEVICE);
+		dma_unmap_single(&ep->pci_dev->dev,
+				 ep->tx_ring[entry].bufaddr, skb->len,
+				 DMA_TO_DEVICE);
 		dev_consume_skb_irq(skb);
 		ep->tx_skbuff[entry] = NULL;
 	}
@@ -1178,20 +1186,21 @@ static int epic_rx(struct net_device *dev, int budget)
 		if (pkt_len < rx_copybreak &&
 		    (skb = netdev_alloc_skb(dev, pkt_len + 2)) != NULL) {
 			skb_reserve(skb, 2);	/* 16 byte align the IP header */
-			pci_dma_sync_single_for_cpu(ep->pci_dev,
-						    ep->rx_ring[entry].bufaddr,
-						    ep->rx_buf_sz,
-						    PCI_DMA_FROMDEVICE);
+			dma_sync_single_for_cpu(&ep->pci_dev->dev,
+						ep->rx_ring[entry].bufaddr,
+						ep->rx_buf_sz,
+						DMA_FROM_DEVICE);
 			skb_copy_to_linear_data(skb, ep->rx_skbuff[entry]->data, pkt_len);
 			skb_put(skb, pkt_len);
-			pci_dma_sync_single_for_device(ep->pci_dev,
-						       ep->rx_ring[entry].bufaddr,
-						       ep->rx_buf_sz,
-						       PCI_DMA_FROMDEVICE);
+			dma_sync_single_for_device(&ep->pci_dev->dev,
+						   ep->rx_ring[entry].bufaddr,
+						   ep->rx_buf_sz,
+						   DMA_FROM_DEVICE);
 		} else {
-			pci_unmap_single(ep->pci_dev,
-					 ep->rx_ring[entry].bufaddr,
-					 ep->rx_buf_sz, PCI_DMA_FROMDEVICE);
+			dma_unmap_single(&ep->pci_dev->dev,
+					 ep->rx_ring[entry].bufaddr,
+					 ep->rx_buf_sz,
+					 DMA_FROM_DEVICE);
 			skb_put(skb = ep->rx_skbuff[entry], pkt_len);
 			ep->rx_skbuff[entry] = NULL;
 		}
@@ -1213,8 +1222,10 @@ static int epic_rx(struct net_device *dev, int budget)
 			if (skb == NULL)
 				break;
 			skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
-			ep->rx_ring[entry].bufaddr = pci_map_single(ep->pci_dev,
-				skb->data, ep->rx_buf_sz, PCI_DMA_FROMDEVICE);
+			ep->rx_ring[entry].bufaddr = dma_map_single(&ep->pci_dev->dev,
+								    skb->data,
+								    ep->rx_buf_sz,
+								    DMA_FROM_DEVICE);
 			work_done++;
 		}
 		/* AV: shouldn't we add a barrier here? */
@@ -1294,8 +1305,8 @@ static int epic_close(struct net_device *dev)
 		ep->rx_ring[i].rxstatus = 0;		/* Not owned by Epic chip. */
 		ep->rx_ring[i].buflength = 0;
 		if (skb) {
-			pci_unmap_single(pdev, ep->rx_ring[i].bufaddr,
-					 ep->rx_buf_sz, PCI_DMA_FROMDEVICE);
+			dma_unmap_single(&pdev->dev, ep->rx_ring[i].bufaddr,
+					 ep->rx_buf_sz, DMA_FROM_DEVICE);
 			dev_kfree_skb(skb);
 		}
 		ep->rx_ring[i].bufaddr = 0xBADF00D0; /* An invalid address. */
@@ -1305,8 +1316,8 @@ static int epic_close(struct net_device *dev)
 		ep->tx_skbuff[i] = NULL;
 		if (!skb)
 			continue;
-		pci_unmap_single(pdev, ep->tx_ring[i].bufaddr, skb->len,
-				 PCI_DMA_TODEVICE);
+		dma_unmap_single(&pdev->dev, ep->tx_ring[i].bufaddr, skb->len,
+				 DMA_TO_DEVICE);
 		dev_kfree_skb(skb);
 	}
 
@@ -1502,8 +1513,10 @@ static void epic_remove_one(struct pci_dev *pdev)
 	struct net_device *dev = pci_get_drvdata(pdev);
 	struct epic_private *ep = netdev_priv(dev);
 
-	pci_free_consistent(pdev, TX_TOTAL_SIZE, ep->tx_ring, ep->tx_ring_dma);
-	pci_free_consistent(pdev, RX_TOTAL_SIZE, ep->rx_ring, ep->rx_ring_dma);
+	dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring,
+			  ep->tx_ring_dma);
+	dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring,
+			  ep->rx_ring_dma);
 	unregister_netdev(dev);
 	pci_iounmap(pdev, ep->ioaddr);
 	pci_release_regions(pdev);
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index 1c4fea9c3ec4..f6b73afd1879 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -535,10 +535,10 @@ static inline void smc_rcv(struct net_device *dev)
 /*
  * This is called to actually send a packet to the chip.
  */
-static void smc_hardware_send_pkt(unsigned long data)
+static void smc_hardware_send_pkt(struct tasklet_struct *t)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct smc_local *lp = netdev_priv(dev);
+	struct smc_local *lp = from_tasklet(lp, t, tx_task);
+	struct net_device *dev = lp->dev;
 	void __iomem *ioaddr = lp->base;
 	struct sk_buff *skb;
 	unsigned int packet_no, len;
@@ -688,7 +688,7 @@ smc_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * Allocation succeeded: push packet to the chip's own memory
 	 * immediately.
 	 */
-	smc_hardware_send_pkt((unsigned long)dev);
+	smc_hardware_send_pkt(&lp->tx_task);
 	}
 
 	return NETDEV_TX_OK;
@@ -1036,7 +1036,6 @@ static void smc_phy_configure(struct work_struct *work)
 	int phyaddr = lp->mii.phy_id;
 	int my_phy_caps; /* My PHY capabilities */
 	int my_ad_caps; /* My Advertised capabilities */
-	int status;
 
 	DBG(3, dev, "smc_program_phy()\n");
 
@@ -1110,7 +1109,7 @@ static void smc_phy_configure(struct work_struct *work)
 	 * auto-negotiation is restarted, sometimes it isn't ready and
 	 * the link does not come up.
 	 */
-	status = smc_phy_read(dev, phyaddr, MII_ADVERTISE);
+	smc_phy_read(dev, phyaddr, MII_ADVERTISE);
 
 	DBG(2, dev, "phy caps=%x\n", my_phy_caps);
 	DBG(2, dev, "phy advertised caps=%x\n", my_ad_caps);
@@ -1965,7 +1964,7 @@ static int smc_probe(struct net_device *dev, void __iomem *ioaddr,
 	dev->netdev_ops = &smc_netdev_ops;
 	dev->ethtool_ops = &smc_ethtool_ops;
 
-	tasklet_init(&lp->tx_task, smc_hardware_send_pkt, (unsigned long)dev);
+	tasklet_setup(&lp->tx_task, smc_hardware_send_pkt);
 	INIT_WORK(&lp->phy_configure, smc_phy_configure);
 	lp->dev = dev;
 	lp->mii.phy_id_mask = 0x1f;
diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
index fc168f85e7af..823d9a7184fe 100644
--- a/drivers/net/ethernet/smsc/smsc911x.c
+++ b/drivers/net/ethernet/smsc/smsc911x.c
@@ -1196,9 +1196,8 @@ smsc911x_rx_fastforward(struct smsc911x_data *pdata, unsigned int pktwords)
 			SMSC_WARN(pdata, hw, "Timed out waiting for "
 				  "RX FFWD to finish, RX_DP_CTRL: 0x%08X", val);
 	} else {
-		unsigned int temp;
 		while (pktwords--)
-			temp = smsc911x_reg_read(pdata, RX_DATA_FIFO);
+			smsc911x_reg_read(pdata, RX_DATA_FIFO);
 	}
 }
 
@@ -2055,7 +2054,6 @@ static int smsc911x_eeprom_write_location(struct smsc911x_data *pdata,
 					  u8 address, u8 data)
 {
 	u32 op = E2P_CMD_EPC_CMD_ERASE_ | address;
-	u32 temp;
 	int ret;
 
 	SMSC_TRACE(pdata, drv, "address 0x%x, data 0x%x", address, data);
@@ -2066,7 +2064,7 @@ static int smsc911x_eeprom_write_location(struct smsc911x_data *pdata,
 		smsc911x_reg_write(pdata, E2P_DATA, (u32)data);
 
 		/* Workaround for hardware read-after-write restriction */
-		temp = smsc911x_reg_read(pdata, BYTE_TEST);
+		smsc911x_reg_read(pdata, BYTE_TEST);
 
 		ret = smsc911x_eeprom_send_cmd(pdata, op);
 	}
diff --git a/drivers/net/ethernet/smsc/smsc9420.c b/drivers/net/ethernet/smsc/smsc9420.c
index 42bef04d65ba..c1dab009415d 100644
--- a/drivers/net/ethernet/smsc/smsc9420.c
+++ b/drivers/net/ethernet/smsc/smsc9420.c
@@ -497,8 +497,9 @@ static void smsc9420_free_tx_ring(struct smsc9420_pdata *pd)
 
 		if (skb) {
 			BUG_ON(!pd->tx_buffers[i].mapping);
-			pci_unmap_single(pd->pdev, pd->tx_buffers[i].mapping,
-					 skb->len, PCI_DMA_TODEVICE);
+			dma_unmap_single(&pd->pdev->dev,
+					 pd->tx_buffers[i].mapping, skb->len,
+					 DMA_TO_DEVICE);
 			dev_kfree_skb_any(skb);
 		}
 
@@ -530,8 +531,9 @@ static void smsc9420_free_rx_ring(struct smsc9420_pdata *pd)
 			dev_kfree_skb_any(pd->rx_buffers[i].skb);
 
 		if (pd->rx_buffers[i].mapping)
-			pci_unmap_single(pd->pdev, pd->rx_buffers[i].mapping,
-					 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+			dma_unmap_single(&pd->pdev->dev,
+					 pd->rx_buffers[i].mapping,
+					 PKT_BUF_SZ, DMA_FROM_DEVICE);
 
 		pd->rx_ring[i].status = 0;
 		pd->rx_ring[i].length = 0;
@@ -749,8 +751,8 @@ static void smsc9420_rx_handoff(struct smsc9420_pdata *pd, const int index,
 	dev->stats.rx_packets++;
 	dev->stats.rx_bytes += packet_length;
 
-	pci_unmap_single(pd->pdev, pd->rx_buffers[index].mapping,
-			 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
+	dma_unmap_single(&pd->pdev->dev, pd->rx_buffers[index].mapping,
+			 PKT_BUF_SZ, DMA_FROM_DEVICE);
 	pd->rx_buffers[index].mapping = 0;
 
 	skb = pd->rx_buffers[index].skb;
@@ -782,9 +784,9 @@ static int smsc9420_alloc_rx_buffer(struct smsc9420_pdata *pd, int index)
 	if (unlikely(!skb))
 		return -ENOMEM;
 
-	mapping = pci_map_single(pd->pdev, skb_tail_pointer(skb),
-				 PKT_BUF_SZ, PCI_DMA_FROMDEVICE);
-	if (pci_dma_mapping_error(pd->pdev, mapping)) {
+	mapping = dma_map_single(&pd->pdev->dev, skb_tail_pointer(skb),
+				 PKT_BUF_SZ, DMA_FROM_DEVICE);
+	if (dma_mapping_error(&pd->pdev->dev, mapping)) {
 		dev_kfree_skb_any(skb);
 		netif_warn(pd, rx_err, pd->dev, "pci_map_single failed!\n");
 		return -ENOMEM;
@@ -901,8 +903,10 @@ static void smsc9420_complete_tx(struct net_device *dev)
 		BUG_ON(!pd->tx_buffers[index].skb);
 		BUG_ON(!pd->tx_buffers[index].mapping);
 
-		pci_unmap_single(pd->pdev, pd->tx_buffers[index].mapping,
-				 pd->tx_buffers[index].skb->len, PCI_DMA_TODEVICE);
+		dma_unmap_single(&pd->pdev->dev,
+				 pd->tx_buffers[index].mapping,
+				 pd->tx_buffers[index].skb->len,
+				 DMA_TO_DEVICE);
 		pd->tx_buffers[index].mapping = 0;
 
 		dev_kfree_skb_any(pd->tx_buffers[index].skb);
@@ -932,9 +936,9 @@ static netdev_tx_t smsc9420_hard_start_xmit(struct sk_buff *skb,
 	BUG_ON(pd->tx_buffers[index].skb);
 	BUG_ON(pd->tx_buffers[index].mapping);
 
-	mapping = pci_map_single(pd->pdev, skb->data,
-				 skb->len, PCI_DMA_TODEVICE);
-	if (pci_dma_mapping_error(pd->pdev, mapping)) {
+	mapping = dma_map_single(&pd->pdev->dev, skb->data, skb->len,
+				 DMA_TO_DEVICE);
+	if (dma_mapping_error(&pd->pdev->dev, mapping)) {
 		netif_warn(pd, tx_err, pd->dev,
 			   "pci_map_single failed, dropping packet\n");
 		return NETDEV_TX_BUSY;
@@ -1522,7 +1526,7 @@ smsc9420_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto out_free_netdev_2;
 	}
 
-	if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
+	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) {
 		netdev_err(dev, "No usable DMA configuration, aborting\n");
 		goto out_free_regions_3;
 	}
@@ -1540,10 +1544,9 @@ smsc9420_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	pd = netdev_priv(dev);
 
 	/* pci descriptors are created in the PCI consistent area */
-	pd->rx_ring = pci_alloc_consistent(pdev,
-		sizeof(struct smsc9420_dma_desc) * RX_RING_SIZE +
-		sizeof(struct smsc9420_dma_desc) * TX_RING_SIZE,
-		&pd->rx_dma_addr);
+	pd->rx_ring = dma_alloc_coherent(&pdev->dev,
+		sizeof(struct smsc9420_dma_desc) * (RX_RING_SIZE + TX_RING_SIZE),
+		&pd->rx_dma_addr, GFP_KERNEL);
 
 	if (!pd->rx_ring)
 		goto out_free_io_4;
@@ -1599,8 +1602,9 @@ smsc9420_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	return 0;
 
 out_free_dmadesc_5:
-	pci_free_consistent(pdev, sizeof(struct smsc9420_dma_desc) *
-		(RX_RING_SIZE + TX_RING_SIZE), pd->rx_ring, pd->rx_dma_addr);
+	dma_free_coherent(&pdev->dev,
+		sizeof(struct smsc9420_dma_desc) * (RX_RING_SIZE + TX_RING_SIZE),
+		pd->rx_ring, pd->rx_dma_addr);
 out_free_io_4:
 	iounmap(virt_addr - LAN9420_CPSR_ENDIAN_OFFSET);
 out_free_regions_3:
@@ -1632,8 +1636,9 @@ static void smsc9420_remove(struct pci_dev *pdev)
 	BUG_ON(!pd->tx_ring);
 	BUG_ON(!pd->rx_ring);
 
-	pci_free_consistent(pdev, sizeof(struct smsc9420_dma_desc) *
-		(RX_RING_SIZE + TX_RING_SIZE), pd->rx_ring, pd->rx_dma_addr);
+	dma_free_coherent(&pdev->dev,
+		sizeof(struct smsc9420_dma_desc) * (RX_RING_SIZE + TX_RING_SIZE),
+		pd->rx_ring, pd->rx_dma_addr);
 
 	iounmap(pd->ioaddr - LAN9420_CPSR_ENDIAN_OFFSET);
 	pci_release_regions(pdev);
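All four drivers in the diff above make the same mechanical change: the legacy PCI DMA wrappers (pci_alloc_consistent(), pci_map_single(), PCI_DMA_*DEVICE) are replaced by the generic DMA API operating on &pdev->dev, with an explicit GFP flag for coherent allocations and dma_mapping_error() for mapping-error checks. A condensed sketch of the pattern, with made-up example_* helper names:

```c
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/pci.h>

/*
 * Before (legacy wrappers):
 *     ring = pci_alloc_consistent(pdev, size, &ring_dma);
 *     addr = pci_map_single(pdev, data, len, PCI_DMA_TODEVICE);
 *     pci_unmap_single(pdev, addr, len, PCI_DMA_TODEVICE);
 *     pci_free_consistent(pdev, size, ring, ring_dma);
 */

/* After: coherent ring allocation via the generic DMA API. */
static void *example_alloc_ring(struct pci_dev *pdev, size_t size,
				dma_addr_t *ring_dma)
{
	return dma_alloc_coherent(&pdev->dev, size, ring_dma, GFP_KERNEL);
}

/* After: streaming TX mapping with an explicit error check. */
static int example_map_tx(struct pci_dev *pdev, void *data, size_t len,
			  dma_addr_t *mapping)
{
	*mapping = dma_map_single(&pdev->dev, data, len, DMA_TO_DEVICE);
	if (dma_mapping_error(&pdev->dev, *mapping))
		return -ENOMEM;
	return 0;
}

static void example_unmap_tx(struct pci_dev *pdev, dma_addr_t mapping,
			     size_t len)
{
	dma_unmap_single(&pdev->dev, mapping, len, DMA_TO_DEVICE);
}
```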