Merge tag 'ASB-2022-03-05_4.19-stable' of https://github.com/aosp-mirror/kernel_common into android12-base
https://source.android.com/security/bulletin/2022-03-01

CVE-2020-29368 CVE-2021-39685 CVE-2021-39686 CVE-2021-39698 CVE-2021-3655

* tag 'ASB-2022-03-05_4.19-stable' of https://github.com/aosp-mirror/kernel_common:
  Linux 4.19.232
  tty: n_gsm: fix encoding of control signal octet bit DV
  xhci: Prevent futile URB re-submissions due to incorrect return value.
  xhci: re-initialize the HC during resume if HCE was set
  usb: dwc3: gadget: Let the interrupt handler disable bottom halves.
  usb: dwc3: pci: Fix Bay Trail phy GPIO mappings
  USB: serial: option: add Telit LE910R1 compositions
  USB: serial: option: add support for DW5829e
  tracefs: Set the group ownership in apply_options() not parse_options()
  USB: gadget: validate endpoint index for xilinx udc
  usb: gadget: rndis: add spinlock for rndis response list
  Revert "USB: serial: ch341: add new Product ID for CH341A"
  ata: pata_hpt37x: disable primary channel on HPT371
  iio: adc: men_z188_adc: Fix a resource leak in an error handling path
  tracing: Have traceon and traceoff trigger honor the instance
  fget: clarify and improve __fget_files() implementation
  memblock: use kfree() to release kmalloced memblock regions
  Revert "drm/nouveau/pmu/gm200-: avoid touching PMU outside of DEVINIT/PREOS/ACR"
  gpio: tegra186: Fix chip_data type confusion
  tty: n_gsm: fix proper link termination after failed open
  RDMA/ib_srp: Fix a deadlock
  configfs: fix a race in configfs_{,un}register_subsystem()
  net/mlx5e: Fix wrong return value on ioctl EEPROM query failure
  drm/edid: Always set RGB444
  openvswitch: Fix setting ipv6 fields causing hw csum failure
  gso: do not skip outer ip header in case of ipip and net_failover
  tipc: Fix end of loop tests for list_for_each_entry()
  net: __pskb_pull_tail() & pskb_carve_frag_list() drop_monitor friends
  ping: remove pr_err from ping_lookup
  USB: zaurus: support another broken Zaurus
  sr9700: sanity check for packet length
  parisc/unaligned: Fix ldw() and stw() unalignment handlers
  parisc/unaligned: Fix fldd and fstd unaligned handlers on 32-bit kernel
  vhost/vsock: don't check owner in vhost_vsock_stop() while releasing
  cgroup/cpuset: Fix a race between cpuset_attach() and cpu hotplug
  Linux 4.19.231
  net: macb: Align the dma and coherent dma masks
  net: usb: qmi_wwan: Add support for Dell DW5829e
  tracing: Fix tp_printk option related with tp_printk_stop_on_boot
  ata: libata-core: Disable TRIM on M88V29
  kconfig: let 'shell' return enough output for deep path names
  arm64: dts: meson-gx: add ATF BL32 reserved-memory region
  netfilter: conntrack: don't refresh sctp entries in closed state
  irqchip/sifive-plic: Add missing thead,c900-plic match string
  ARM: OMAP2+: hwmod: Add of_node_put() before break
  KVM: x86/pmu: Use AMD64_RAW_EVENT_MASK for PERF_TYPE_RAW
  Drivers: hv: vmbus: Fix memory leak in vmbus_add_channel_kobj
  Drivers: hv: vmbus: Expose monitor data only when monitor pages are used
  mtd: rawnand: brcmnand: Fixed incorrect sub-page ECC status
  mtd: rawnand: brcmnand: Refactored code to introduce helper functions
  lib/iov_iter: initialize "flags" in new pipe_buffer
  i2c: brcmstb: fix support for DSL and CM variants
  dmaengine: sh: rcar-dmac: Check for error num after setting mask
  net: sched: limit TC_ACT_REPEAT loops
  EDAC: Fix calculation of returned address and next offset in edac_align_ptr()
  mtd: rawnand: qcom: Fix clock sequencing in qcom_nandc_probe()
  NFS: Do not report writeback errors in nfs_getattr()
  NFS: LOOKUP_DIRECTORY is also ok with symlinks
  block/wbt: fix negative inflight counter when remove scsi device
  ext4: check for out-of-order index extents in ext4_valid_extent_entries()
  powerpc/lib/sstep: fix 'ptesync' build error
  ASoC: ops: Fix stereo change notifications in snd_soc_put_volsw_range()
  ASoC: ops: Fix stereo change notifications in snd_soc_put_volsw()
  ALSA: hda: Fix missing codec probe on Shenker Dock 15
  ALSA: hda: Fix regression on forced probe mask option
  libsubcmd: Fix use-after-free for realloc(..., 0)
  bonding: fix data-races around agg_select_timer
  drop_monitor: fix data-race in dropmon_net_event / trace_napi_poll_hit
  ping: fix the dif and sdif check in ping_lookup
  net: ieee802154: ca8210: Fix lifs/sifs periods
  net: dsa: lan9303: fix reset on probe
  iwlwifi: pcie: gen2: fix locking when "HW not ready"
  iwlwifi: pcie: fix locking when "HW not ready"
  vsock: remove vsock from connected table when connect is interrupted by a signal
  mmc: block: fix read single on recovery logic
  taskstats: Cleanup the use of task->exit_code
  xfrm: Don't accidentally set RTO_ONLINK in decode_session4()
  drm/radeon: Fix backlight control on iMac 12,1
  iwlwifi: fix use-after-free
  Revert "module, async: async_synchronize_full() on module init iff async is used"
  nvme-rdma: fix possible use-after-free in transport error_recovery work
  nvme: fix a possible use-after-free in controller reset during load
  quota: make dquot_quota_sync return errors from ->sync_fs
  vfs: make freeze_super abort when sync_filesystem returns error
  ax25: improve the incomplete fix to avoid UAF and NPD bugs
  selftests/zram: Adapt the situation that /dev/zram0 is being used
  selftests/zram01.sh: Fix compression ratio calculation
  selftests/zram: Skip max_comp_streams interface on newer kernel
  net: ieee802154: at86rf230: Stop leaking skb's
  btrfs: send: in case of IO error log it
  parisc: Fix sglist access in ccio-dma.c
  parisc: Fix data TLB miss in sba_unmap_sg
  serial: parisc: GSC: fix build when IOSAPIC is not set
  net: usb: ax88179_178a: Fix out-of-bounds accesses in RX fixup
  Makefile.extrawarn: Move -Wunaligned-access to W=1
  Linux 4.19.230
  perf: Fix list corruption in perf_cgroup_switch()
  hwmon: (dell-smm) Speed up setting of fan speed
  seccomp: Invalidate seccomp mode to catch death failures
  USB: serial: cp210x: add CPI Bulk Coin Recycler id
  USB: serial: cp210x: add NCR Retail IO box id
  USB: serial: ch341: add support for GW Instek USB2.0-Serial devices
  USB: serial: option: add ZTE MF286D modem
  USB: serial: ftdi_sio: add support for Brainboxes US-159/235/320
  usb: gadget: rndis: check size of RNDIS_MSG_SET command
  USB: gadget: validate interface OS descriptor requests
  usb: dwc3: gadget: Prevent core from processing stale TRBs
  usb: ulpi: Call of_node_put correctly
  usb: ulpi: Move of_node_put to ulpi_dev_release
  n_tty: wake up poll(POLLRDNORM) on receiving data
  vt_ioctl: add array_index_nospec to VT_ACTIVATE
  vt_ioctl: fix array_index_nospec in vt_setactivate
  net: amd-xgbe: disable interrupts during pci removal
  tipc: rate limit warning for received illegal binding update
  veth: fix races around rq->rx_notify_masked
  net: fix a memleak when uncloning an skb dst and its metadata
  net: do not keep the dst cache when uncloning an skb dst and its metadata
  ipmr,ip6mr: acquire RTNL before calling ip[6]mr_free_table() on failure path
  bonding: pair enable_port with slave_arr_updates
  ixgbevf: Require large buffers for build_skb on 82599VF
  usb: f_fs: Fix use-after-free for epfile
  ARM: dts: imx6qdl-udoo: Properly describe the SD card detect
  staging: fbtft: Fix error path in fbtft_driver_module_init()
  ARM: dts: meson: Fix the UART compatible strings
  perf probe: Fix ppc64 'perf probe add events failed' case
  net: bridge: fix stale eth hdr pointer in br_dev_xmit
  ARM: dts: imx23-evk: Remove MX23_PAD_SSP1_DETECT from hog group
  bpf: Add kconfig knob for disabling unpriv bpf by default
  net: stmmac: dwmac-sun8i: use return val of readl_poll_timeout()
  usb: dwc2: gadget: don't try to disable ep0 in dwc2_hsotg_suspend
  scsi: target: iscsi: Make sure the np under each tpg is unique
  net: sched: Clarify error message when qdisc kind is unknown
  NFSv4 expose nfs_parse_server_name function
  NFSv4 remove zero number of fs_locations entries error check
  NFSv4.1: Fix uninitialised variable in devicenotify
  nfs: nfs4clinet: check the return value of kstrdup()
  NFSv4 only print the label when its queried
  NFSD: Fix offset type in I/O trace points
  NFSD: Clamp WRITE offsets
  NFS: Fix initialisation of nfs_client cl_flags field
  net: phy: marvell: Fix MDI-x polarity setting in 88e1118-compatible PHYs
  mmc: sdhci-of-esdhc: Check for error num after setting mask
  ima: Allow template selection with ima_template[_fmt]= after ima_hash=
  ima: Remove ima_policy file before directory
  integrity: check the return value of audit_log_start()
  FROMGIT: f2fs: avoid EINVAL by SBI_NEED_FSCK when pinning a file
  Revert "tracefs: Have tracefs directories not set OTH permission bits by default"
  ANDROID: GKI: Enable CONFIG_SERIAL_8250_RUNTIME_UARTS=0
  Linux 4.19.229
  tipc: improve size validations for received domain records
  moxart: fix potential use-after-free on remove path
  cgroup-v1: Require capabilities to set release_agent
  Linux 4.19.228
  ext4: fix error handling in ext4_restore_inline_data()
  EDAC/xgene: Fix deferred probing
  EDAC/altera: Fix deferred probing
  rtc: cmos: Evaluate century appropriate
  selftests: futex: Use variable MAKE instead of make
  nfsd: nfsd4_setclientid_confirm mistakenly expires confirmed client.
  scsi: bnx2fc: Make bnx2fc_recv_frame() mp safe
  ASoC: max9759: fix underflow in speaker_gain_control_put()
  ASoC: cpcap: Check for NULL pointer after calling of_get_child_by_name
  ASoC: fsl: Add missing error handling in pcm030_fabric_probe
  drm/i915/overlay: Prevent divide by zero bugs in scaling
  net: stmmac: ensure PTP time register reads are consistent
  net: macsec: Verify that send_sci is on when setting Tx sci explicitly
  net: ieee802154: Return meaningful error codes from the netlink helpers
  net: ieee802154: ca8210: Stop leaking skb's
  net: ieee802154: mcr20a: Fix lifs/sifs periods
  net: ieee802154: hwsim: Ensure proper channel selection at probe time
  spi: meson-spicc: add IRQ check in meson_spicc_probe
  spi: mediatek: Avoid NULL pointer crash in interrupt
  spi: bcm-qspi: check for valid cs before applying chip select
  iommu/amd: Fix loop timeout issue in iommu_ga_log_enable()
  iommu/vt-d: Fix potential memory leak in intel_setup_irq_remapping()
  RDMA/mlx4: Don't continue event handler after memory allocation failure
  Revert "ASoC: mediatek: Check for error clk pointer"
  block: bio-integrity: Advance seed correctly for larger interval sizes
  drm/nouveau: fix off by one in BIOS boundary checking
  ALSA: hda/realtek: Fix silent output on Gigabyte X570 Aorus Xtreme after reboot from Windows
  ALSA: hda/realtek: Fix silent output on Gigabyte X570S Aorus Master (newer chipset)
  ALSA: hda/realtek: Add missing fixup-model entry for Gigabyte X570 ALC1220 quirks
  ASoC: ops: Reject out of bounds values in snd_soc_put_xr_sx()
  ASoC: ops: Reject out of bounds values in snd_soc_put_volsw_sx()
  ASoC: ops: Reject out of bounds values in snd_soc_put_volsw()
  audit: improve audit queue handling when "audit=1" on cmdline
  af_packet: fix data-race in packet_setsockopt / packet_setsockopt
  rtnetlink: make sure to refresh master_dev/m_ops in __rtnl_newlink()
  net: amd-xgbe: Fix skb data length underflow
  net: amd-xgbe: ensure to reset the tx_timer_active flag
  ipheth: fix EOVERFLOW in ipheth_rcvbulk_callback
  tcp: fix possible socket leaks in internal pacing mode
  netfilter: nat: limit port clash resolution attempts
  netfilter: nat: remove l4 protocol port rovers
  ipv4: tcp: send zero IPID in SYNACK messages
  ipv4: raw: lock the socket in raw_bind()
  yam: fix a memory leak in yam_siocdevprivate()
  ibmvnic: don't spin in tasklet
  ibmvnic: init ->running_cap_crqs early
  phylib: fix potential use-after-free
  NFS: Ensure the server has an up to date ctime before renaming
  NFS: Ensure the server has an up to date ctime before hardlinking
  ipv6: annotate accesses to fn->fn_sernum
  drm/msm/dsi: invalid parameter check in msm_dsi_phy_enable
  drm/msm: Fix wrong size calculation
  net-procfs: show net devices bound packet types
  NFSv4: nfs_atomic_open() can race when looking up a non-regular file
  NFSv4: Handle case where the lookup of a directory fails
  hwmon: (lm90) Reduce maximum conversion rate for G781
  ipv4: avoid using shared IP generator for connected sockets
  ping: fix the sk_bound_dev_if match in ping_lookup
  net: fix information leakage in /proc/net/ptype
  ipv6_tunnel: Rate limit warning messages
  scsi: bnx2fc: Flush destroy_work queue before calling bnx2fc_interface_put()
  rpmsg: char: Fix race between the release of rpmsg_eptdev and cdev
  rpmsg: char: Fix race between the release of rpmsg_ctrldev and cdev
  i40e: fix unsigned stat widths
  i40e: Fix queues reservation for XDP
  i40e: Fix issue when maximum queues is exceeded
  i40e: Increase delay to 1 s after global EMP reset
  powerpc/32: Fix boot failure with GCC latent entropy plugin
  net: sfp: ignore disabled SFP node
  usb: typec: tcpm: Do not disconnect while receiving VBUS off
  USB: core: Fix hang in usb_kill_urb by adding memory barriers
  usb: gadget: f_sourcesink: Fix isoc transfer for USB_SPEED_SUPER_PLUS
  usb: common: ulpi: Fix crash in ulpi_match()
  usb-storage: Add unusual-devs entry for VL817 USB-SATA bridge
  tty: Add support for Brainboxes UC cards.
  tty: n_gsm: fix SW flow control encoding/handling
  serial: stm32: fix software flow control transfer
  serial: 8250: of: Fix mapped region size when using reg-offset property
  netfilter: nft_payload: do not update layer 4 checksum when mangling fragments
  drm/etnaviv: relax submit size limits
  PM: wakeup: simplify the output logic of pm_show_wakelocks()
  udf: Fix NULL ptr deref when converting from inline format
  udf: Restore i_lenAlloc when inode expansion fails
  scsi: zfcp: Fix failed recovery on gone remote port with non-NPIV FCP devices
  s390/hypfs: include z/VM guests with access control group set
  Bluetooth: refactor malicious adv data check
  ANDROID: Increase x86 cmdline size to 4k

Change-Id: Icc3c578b223a53feb469666071df2e1715fa8698
Signed-off-by: UtsavBalar1231 <utsavbalar1231@gmail.com>

Conflicts:
	drivers/usb/dwc3/gadget.c
	drivers/usb/gadget/function/f_fs.c
commit 61bba85e05
@@ -81,7 +81,9 @@ What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/latency
 Date: September. 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Channel signaling latency
+Description: Channel signaling latency. This file is available only for
+		performance critical channels (storage, network, etc.) that use
+		the monitor page mechanism.
 Users: Debugging tools

 What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/out_mask
@@ -95,7 +97,9 @@ What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/pending
 Date: September. 2017
 KernelVersion: 4.14
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Channel interrupt pending state
+Description: Channel interrupt pending state. This file is available only for
+		performance critical channels (storage, network, etc.) that use
+		the monitor page mechanism.
 Users: Debugging tools

 What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/read_avail
@@ -137,7 +141,9 @@ What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/monitor_id
 Date: January. 2018
 KernelVersion: 4.16
 Contact: Stephen Hemminger <sthemmin@microsoft.com>
-Description: Monitor bit associated with channel
+Description: Monitor bit associated with channel. This file is available only
+		for performance critical channels (storage, network, etc.) that
+		use the monitor page mechanism.
 Users: Debugging tools and userspace drivers

 What: /sys/bus/vmbus/devices/<UUID>/channels/<N>/ring
@@ -97,6 +97,7 @@ show up in /proc/sys/kernel:
 - sysctl_writes_strict
 - tainted
 - threads-max
+- unprivileged_bpf_disabled
 - unknown_nmi_panic
 - watchdog
 - watchdog_thresh
@@ -1079,6 +1080,26 @@ available RAM pages threads-max is reduced accordingly.

 ==============================================================

+unprivileged_bpf_disabled:
+
+Writing 1 to this entry will disable unprivileged calls to bpf();
+once disabled, calling bpf() without CAP_SYS_ADMIN will return
+-EPERM. Once set to 1, this can't be cleared from the running kernel
+anymore.
+
+Writing 2 to this entry will also disable unprivileged calls to bpf(),
+however, an admin can still change this setting later on, if needed, by
+writing 0 or 1 to this entry.
+
+If BPF_UNPRIV_DEFAULT_OFF is enabled in the kernel config, then this
+entry will default to 2 instead of 0.
+
+0 - Unprivileged calls to bpf() are enabled
+1 - Unprivileged calls to bpf() are disabled without recovery
+2 - Unprivileged calls to bpf() are disabled
+
+==============================================================
+
 unknown_nmi_panic:

 The value in this file affects behavior of handling NMI. When the
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 4
 PATCHLEVEL = 19
-SUBLEVEL = 227
+SUBLEVEL = 232
 EXTRAVERSION =
 NAME = "People's Front"

@ -79,7 +79,6 @@
|
||||
MX23_PAD_LCD_RESET__GPIO_1_18
|
||||
MX23_PAD_PWM3__GPIO_1_29
|
||||
MX23_PAD_PWM4__GPIO_1_30
|
||||
MX23_PAD_SSP1_DETECT__SSP1_DETECT
|
||||
>;
|
||||
fsl,drive-strength = <MXS_DRIVE_4mA>;
|
||||
fsl,voltage = <MXS_VOLTAGE_HIGH>;
|
||||
|
@@ -5,6 +5,8 @@
  * Author: Fabio Estevam <fabio.estevam@freescale.com>
  */

+#include <dt-bindings/gpio/gpio.h>
+
 / {
 	aliases {
 		backlight = &backlight;
@@ -210,6 +212,7 @@
 				MX6QDL_PAD_SD3_DAT1__SD3_DATA1	0x17059
 				MX6QDL_PAD_SD3_DAT2__SD3_DATA2	0x17059
 				MX6QDL_PAD_SD3_DAT3__SD3_DATA3	0x17059
+				MX6QDL_PAD_SD3_DAT5__GPIO7_IO00	0x1b0b0
 			>;
 		};

@@ -276,7 +279,7 @@
 &usdhc3 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usdhc3>;
-	non-removable;
+	cd-gpios = <&gpio7 0 GPIO_ACTIVE_LOW>;
 	status = "okay";
 };

@@ -91,14 +91,14 @@
 			};

 			uart_A: serial@84c0 {
-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
+				compatible = "amlogic,meson6-uart";
 				reg = <0x84c0 0x18>;
 				interrupts = <GIC_SPI 26 IRQ_TYPE_EDGE_RISING>;
 				status = "disabled";
 			};

 			uart_B: serial@84dc {
-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
+				compatible = "amlogic,meson6-uart";
 				reg = <0x84dc 0x18>;
 				interrupts = <GIC_SPI 75 IRQ_TYPE_EDGE_RISING>;
 				status = "disabled";
@@ -136,7 +136,7 @@
 			};

 			uart_C: serial@8700 {
-				compatible = "amlogic,meson6-uart", "amlogic,meson-uart";
+				compatible = "amlogic,meson6-uart";
 				reg = <0x8700 0x18>;
 				interrupts = <GIC_SPI 93 IRQ_TYPE_EDGE_RISING>;
 				status = "disabled";
@@ -219,7 +219,7 @@
 			};

 			uart_AO: serial@4c0 {
-				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart", "amlogic,meson-uart";
+				compatible = "amlogic,meson6-uart", "amlogic,meson-ao-uart";
 				reg = <0x4c0 0x18>;
 				interrupts = <GIC_SPI 90 IRQ_TYPE_EDGE_RISING>;
 				status = "disabled";
@@ -754,8 +754,10 @@ static int __init _init_clkctrl_providers(void)

 	for_each_matching_node(np, ti_clkctrl_match_table) {
 		ret = _setup_clkctrl_provider(np);
-		if (ret)
+		if (ret) {
+			of_node_put(np);
 			break;
+		}
 	}

 	return ret;
@@ -41,6 +41,12 @@
 			no-map;
 		};

+		/* 32 MiB reserved for ARM Trusted Firmware (BL32) */
+		secmon_reserved_bl32: secmon@5300000 {
+			reg = <0x0 0x05300000 0x0 0x2000000>;
+			no-map;
+		};
+
 		linux,cma {
 			compatible = "shared-dma-pool";
 			reusable;
@@ -299,6 +299,7 @@ CONFIG_SERIAL_8250=y
 # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
 CONFIG_SERIAL_8250_CONSOLE=y
 # CONFIG_SERIAL_8250_EXAR is not set
+CONFIG_SERIAL_8250_RUNTIME_UARTS=0
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_AMBA_PL011=y
 CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
@@ -354,7 +354,7 @@ static int emulate_stw(struct pt_regs *regs, int frreg, int flop)
 	: "r" (val), "r" (regs->ior), "r" (regs->isr)
 	: "r19", "r20", "r21", "r22", "r1", FIXUP_BRANCH_CLOBBER );

-	return 0;
+	return ret;
 }
 static int emulate_std(struct pt_regs *regs, int frreg, int flop)
 {
@@ -411,7 +411,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
 	__asm__ __volatile__ (
 "	mtsp	%4, %%sr1\n"
 "	zdep	%2, 29, 2, %%r19\n"
-"	dep	%%r0, 31, 2, %2\n"
+"	dep	%%r0, 31, 2, %3\n"
 "	mtsar	%%r19\n"
 "	zvdepi	-2, 32, %%r19\n"
 "1:	ldw	0(%%sr1,%3),%%r20\n"
@@ -423,7 +423,7 @@ static int emulate_std(struct pt_regs *regs, int frreg, int flop)
 "	andcm	%%r21, %%r19, %%r21\n"
 "	or	%1, %%r20, %1\n"
 "	or	%2, %%r21, %2\n"
-"3:	stw	%1,0(%%sr1,%1)\n"
+"3:	stw	%1,0(%%sr1,%3)\n"
 "4:	stw	%%r1,4(%%sr1,%3)\n"
 "5:	stw	%2,8(%%sr1,%3)\n"
 "	copy	%%r0, %0\n"
@@ -610,7 +610,6 @@ void handle_unaligned(struct pt_regs *regs)
 		ret = ERR_NOTHANDLED;	/* "undefined", but lets kill them. */
 		break;
 	}
-#ifdef CONFIG_PA20
 	switch (regs->iir & OPCODE2_MASK)
 	{
 	case OPCODE_FLDD_L:
@@ -621,22 +620,23 @@ void handle_unaligned(struct pt_regs *regs)
 		flop=1;
 		ret = emulate_std(regs, R2(regs->iir),1);
 		break;
+#ifdef CONFIG_PA20
 	case OPCODE_LDD_L:
 		ret = emulate_ldd(regs, R2(regs->iir),0);
 		break;
 	case OPCODE_STD_L:
 		ret = emulate_std(regs, R2(regs->iir),0);
 		break;
-	}
 #endif
+	}
 	switch (regs->iir & OPCODE3_MASK)
 	{
 	case OPCODE_FLDW_L:
 		flop=1;
-		ret = emulate_ldw(regs, R2(regs->iir),0);
+		ret = emulate_ldw(regs, R2(regs->iir), 1);
 		break;
 	case OPCODE_LDW_M:
-		ret = emulate_ldw(regs, R2(regs->iir),1);
+		ret = emulate_ldw(regs, R2(regs->iir), 0);
 		break;

 	case OPCODE_FSTW_L:
@@ -15,6 +15,7 @@ CFLAGS_prom_init.o += -fPIC
 CFLAGS_btext.o += -fPIC
 endif

+CFLAGS_setup_32.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
@@ -10,6 +10,9 @@ ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
 CFLAGS_REMOVE_code-patching.o = $(CC_FLAGS_FTRACE)
 CFLAGS_REMOVE_feature-fixups.o = $(CC_FLAGS_FTRACE)

+CFLAGS_code-patching.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+CFLAGS_feature-fixups.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+
 obj-y += string.o alloc.o code-patching.o feature-fixups.o

 obj-$(CONFIG_PPC32) += div64.o copy_32.o crtsavres.o strlen_32.o
@@ -2681,12 +2681,14 @@ void emulate_update_regs(struct pt_regs *regs, struct instruction_op *op)
 		case BARRIER_EIEIO:
 			eieio();
 			break;
+#ifdef CONFIG_PPC64
 		case BARRIER_LWSYNC:
 			asm volatile("lwsync" : : : "memory");
 			break;
 		case BARRIER_PTESYNC:
 			asm volatile("ptesync" : : : "memory");
 			break;
+#endif
 		}
 		break;

@@ -20,6 +20,7 @@

 static char local_guest[] = "        ";
 static char all_guests[] = "*       ";
+static char *all_groups = all_guests;
 static char *guest_query;

 struct diag2fc_data {
@@ -62,10 +63,11 @@ static int diag2fc(int size, char* query, void *addr)

 	memcpy(parm_list.userid, query, NAME_LEN);
 	ASCEBC(parm_list.userid, NAME_LEN);
-	parm_list.addr = (unsigned long) addr ;
+	memcpy(parm_list.aci_grp, all_groups, NAME_LEN);
+	ASCEBC(parm_list.aci_grp, NAME_LEN);
+	parm_list.addr = (unsigned long)addr;
 	parm_list.size = size;
 	parm_list.fmt = 0x02;
-	memset(parm_list.aci_grp, 0x40, NAME_LEN);
 	rc = -1;

 	diag_stat_inc(DIAG_STAT_X2FC);
@@ -268,7 +268,7 @@ CONFIG_INPUT_UINPUT=y
 CONFIG_SERIAL_8250=y
 # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
 CONFIG_SERIAL_8250_CONSOLE=y
-# CONFIG_SERIAL_8250_EXAR is not set
+CONFIG_SERIAL_8250_RUNTIME_UARTS=0
 CONFIG_SERIAL_OF_PLATFORM=y
 CONFIG_SERIAL_MSM_GENI_EARLY_CONSOLE=y
 CONFIG_SERIAL_DEV_BUS=y
@@ -4,7 +4,7 @@

 #include <uapi/asm/setup.h>

-#define COMMAND_LINE_SIZE 2048
+#define COMMAND_LINE_SIZE 4096

 #include <linux/linkage.h>
 #include <asm/page_types.h>
@@ -171,7 +171,7 @@ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)
 	}

 	if (type == PERF_TYPE_RAW)
-		config = eventsel & X86_RAW_EVENT_MASK;
+		config = eventsel & AMD64_RAW_EVENT_MASK;

 	pmc_reprogram_counter(pmc, type, config,
 			      !(eventsel & ARCH_PERFMON_EVENTSEL_USR),
@@ -5413,6 +5413,8 @@ static void bfq_exit_queue(struct elevator_queue *e)
 	spin_unlock_irq(&bfqd->lock);
 #endif

+	wbt_enable_default(bfqd->queue);
+
 	kfree(bfqd);
 }

@@ -399,7 +399,7 @@ void bio_integrity_advance(struct bio *bio, unsigned int bytes_done)
 	struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);
 	unsigned bytes = bio_integrity_bytes(bi, bytes_done >> 9);

-	bip->bip_iter.bi_sector += bytes_done >> 9;
+	bip->bip_iter.bi_sector += bio_integrity_intervals(bi, bytes_done >> 9);
 	bvec_iter_advance(bip->bip_vec, &bip->bip_iter, bytes);
 }
 EXPORT_SYMBOL(bio_integrity_advance);
@@ -877,8 +877,6 @@ void elv_unregister_queue(struct request_queue *q)
 		kobject_uevent(&e->kobj, KOBJ_REMOVE);
 		kobject_del(&e->kobj);
 		e->registered = 0;
-		/* Re-enable throttling in case elevator disabled it */
-		wbt_enable_default(q);
 	}
 }

@@ -4613,6 +4613,7 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {

 	/* devices that don't properly handle TRIM commands */
 	{ "SuperSSpeed S238*",		NULL,	ATA_HORKAGE_NOTRIM, },
+	{ "M88V29*",			NULL,	ATA_HORKAGE_NOTRIM, },

 	/*
 	 * As defined, the DRAT (Deterministic Read After Trim) and RZAT
@@ -916,6 +916,20 @@ static int hpt37x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
 		irqmask &= ~0x10;
 		pci_write_config_byte(dev, 0x5a, irqmask);

+		/*
+		 * HPT371 chips physically have only one channel, the secondary one,
+		 * but the primary channel registers do exist! Go figure...
+		 * So, we manually disable the non-existing channel here
+		 * (if the BIOS hasn't done this already).
+		 */
+		if (dev->device == PCI_DEVICE_ID_TTI_HPT371) {
+			u8 mcr1;
+
+			pci_read_config_byte(dev, 0x50, &mcr1);
+			mcr1 &= ~0x04;
+			pci_write_config_byte(dev, 0x50, mcr1);
+		}
+
 		/*
 		 * default to pci clock. make sure MA15/16 are set to output
 		 * to prevent drives having problems with 40-pin cables. Needed
@@ -1817,7 +1817,9 @@ static int rcar_dmac_probe(struct platform_device *pdev)
 	platform_set_drvdata(pdev, dmac);
 	dmac->dev->dma_parms = &dmac->parms;
 	dma_set_max_seg_size(dmac->dev, RCAR_DMATCR_MASK);
-	dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
+	ret = dma_set_mask_and_coherent(dmac->dev, DMA_BIT_MASK(40));
+	if (ret)
+		return ret;

 	ret = rcar_dmac_parse_of(&pdev->dev, dmac);
 	if (ret < 0)
@@ -366,7 +366,7 @@ static int altr_sdram_probe(struct platform_device *pdev)
 	if (irq < 0) {
 		edac_printk(KERN_ERR, EDAC_MC,
 			    "No irq %d in DT\n", irq);
-		return -ENODEV;
+		return irq;
 	}

 	/* Arria10 has a 2nd IRQ */
@@ -265,7 +265,7 @@ void *edac_align_ptr(void **p, unsigned size, int n_elems)
 	else
 		return (char *)ptr;

-	r = (unsigned long)p % align;
+	r = (unsigned long)ptr % align;

 	if (r == 0)
 		return (char *)ptr;
@@ -1934,7 +1934,7 @@ static int xgene_edac_probe(struct platform_device *pdev)
 			irq = platform_get_irq(pdev, i);
 			if (irq < 0) {
 				dev_err(&pdev->dev, "No IRQ resource\n");
-				rc = -EINVAL;
+				rc = irq;
 				goto out_err;
 			}
 			rc = devm_request_irq(&pdev->dev, irq,
@@ -237,9 +237,12 @@ static int tegra186_gpio_of_xlate(struct gpio_chip *chip,
 	return offset + pin;
 }

+#define to_tegra_gpio(x) container_of((x), struct tegra_gpio, gpio)
+
 static void tegra186_irq_ack(struct irq_data *data)
 {
-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct tegra_gpio *gpio = to_tegra_gpio(gc);
 	void __iomem *base;

 	base = tegra186_gpio_get_base(gpio, data->hwirq);
@@ -251,7 +254,8 @@ static void tegra186_irq_ack(struct irq_data *data)

 static void tegra186_irq_mask(struct irq_data *data)
 {
-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct tegra_gpio *gpio = to_tegra_gpio(gc);
 	void __iomem *base;
 	u32 value;

@@ -266,7 +270,8 @@ static void tegra186_irq_mask(struct irq_data *data)

 static void tegra186_irq_unmask(struct irq_data *data)
 {
-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct tegra_gpio *gpio = to_tegra_gpio(gc);
 	void __iomem *base;
 	u32 value;

@@ -281,7 +286,8 @@ static void tegra186_irq_unmask(struct irq_data *data)

 static int tegra186_irq_set_type(struct irq_data *data, unsigned int flow)
 {
-	struct tegra_gpio *gpio = irq_data_get_irq_chip_data(data);
+	struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+	struct tegra_gpio *gpio = to_tegra_gpio(gc);
 	void __iomem *base;
 	u32 value;

@@ -4910,6 +4910,7 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
 	if (!(edid->input & DRM_EDID_INPUT_DIGITAL))
 		return quirks;

+	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
 	drm_parse_cea_ext(connector, edid);

 	/*
@@ -4963,7 +4964,6 @@ u32 drm_add_display_info(struct drm_connector *connector, const struct edid *edi
 	DRM_DEBUG("%s: Assigning EDID-1.4 digital sink color depth as %d bpc.\n",
 		  connector->name, info->bpc);

-	info->color_formats |= DRM_COLOR_FORMAT_RGB444;
 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB444)
 		info->color_formats |= DRM_COLOR_FORMAT_YCRCB444;
 	if (edid->features & DRM_EDID_FEATURE_RGB_YCRCB422)
@@ -444,8 +444,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 		return -EINVAL;
 	}

-	if (args->stream_size > SZ_64K || args->nr_relocs > SZ_64K ||
-	    args->nr_bos > SZ_64K || args->nr_pmrs > 128) {
+	if (args->stream_size > SZ_128K || args->nr_relocs > SZ_128K ||
+	    args->nr_bos > SZ_128K || args->nr_pmrs > 128) {
 		DRM_ERROR("submit arguments out of size limits\n");
 		return -EINVAL;
 	}
@@ -929,6 +929,9 @@ static int check_overlay_dst(struct intel_overlay *overlay,
 	const struct intel_crtc_state *pipe_config =
 		overlay->crtc->config;

+	if (rec->dst_height == 0 || rec->dst_width == 0)
+		return -EINVAL;
+
 	if (rec->dst_x < pipe_config->pipe_src_w &&
 	    rec->dst_x + rec->dst_width <= pipe_config->pipe_src_w &&
 	    rec->dst_y < pipe_config->pipe_src_h &&
@@ -667,12 +667,14 @@ void __exit msm_dsi_phy_driver_unregister(void)
 int msm_dsi_phy_enable(struct msm_dsi_phy *phy, int src_pll_id,
 			struct msm_dsi_phy_clk_request *clk_req)
 {
-	struct device *dev = &phy->pdev->dev;
+	struct device *dev;
 	int ret;
 
 	if (!phy || !phy->cfg->ops.enable)
 		return -EINVAL;
 
+	dev = &phy->pdev->dev;
+
 	ret = dsi_phy_enable_resource(phy);
 	if (ret) {
 		dev_err(dev, "%s: resource enable failed, %d\n",
@@ -390,7 +390,7 @@ static int msm_init_vram(struct drm_device *dev)
 		of_node_put(node);
 		if (ret)
 			return ret;
-		size = r.end - r.start;
+		size = r.end - r.start + 1;
 		DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start);
 
 		/* if we have no IOMMU, then we need to use carveout allocator.
@@ -38,7 +38,7 @@ nvbios_addr(struct nvkm_bios *bios, u32 *addr, u8 size)
 		*addr += bios->imaged_addr;
 	}
 
-	if (unlikely(*addr + size >= bios->size)) {
+	if (unlikely(*addr + size > bios->size)) {
 		nvkm_error(&bios->subdev, "OOB %d %08x %08x\n", size, p, *addr);
 		return false;
 	}
@@ -70,13 +70,20 @@ nvkm_pmu_fini(struct nvkm_subdev *subdev, bool suspend)
 	return 0;
 }
 
-static void
+static int
 nvkm_pmu_reset(struct nvkm_pmu *pmu)
 {
 	struct nvkm_device *device = pmu->subdev.device;
 
 	if (!pmu->func->enabled(pmu))
-		return;
+		return 0;
+
+	/* Inhibit interrupts, and wait for idle. */
+	nvkm_wr32(device, 0x10a014, 0x0000ffff);
+	nvkm_msec(device, 2000,
+		if (!nvkm_rd32(device, 0x10a04c))
+			break;
+	);
 
 	/* Reset. */
 	if (pmu->func->reset)
@@ -87,37 +94,25 @@ nvkm_pmu_reset(struct nvkm_pmu *pmu)
 		if (!(nvkm_rd32(device, 0x10a10c) & 0x00000006))
 			break;
 	);
+
+	return 0;
 }
 
 static int
 nvkm_pmu_preinit(struct nvkm_subdev *subdev)
 {
 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
-	nvkm_pmu_reset(pmu);
-	return 0;
+	return nvkm_pmu_reset(pmu);
 }
 
 static int
 nvkm_pmu_init(struct nvkm_subdev *subdev)
 {
 	struct nvkm_pmu *pmu = nvkm_pmu(subdev);
-	struct nvkm_device *device = pmu->subdev.device;
-
-	if (!pmu->func->init)
-		return 0;
-
-	if (pmu->func->enabled(pmu)) {
-		/* Inhibit interrupts, and wait for idle. */
-		nvkm_wr32(device, 0x10a014, 0x0000ffff);
-		nvkm_msec(device, 2000,
-			if (!nvkm_rd32(device, 0x10a04c))
-				break;
-		);
-
-		nvkm_pmu_reset(pmu);
-	}
-
-	return pmu->func->init(pmu);
+	int ret = nvkm_pmu_reset(pmu);
+	if (ret == 0 && pmu->func->init)
+		ret = pmu->func->init(pmu);
+	return ret;
 }
 
 static int
@@ -193,7 +193,8 @@ void radeon_atom_backlight_init(struct radeon_encoder *radeon_encoder,
	 * so don't register a backlight device
	 */
 	if ((rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE) &&
-	    (rdev->pdev->device == 0x6741))
+	    (rdev->pdev->device == 0x6741) &&
+	    !dmi_match(DMI_PRODUCT_NAME, "iMac12,1"))
 		return;
 
 	if (!radeon_encoder->enc_priv)
@@ -350,6 +350,7 @@ static struct vmbus_channel *alloc_channel(void)
 static void free_channel(struct vmbus_channel *channel)
 {
 	tasklet_kill(&channel->callback_event);
+	vmbus_remove_channel_attr_group(channel);
 
 	kobject_put(&channel->kobj);
 }
@@ -392,6 +392,8 @@ void vmbus_device_unregister(struct hv_device *device_obj);
 int vmbus_add_channel_kobj(struct hv_device *device_obj,
			   struct vmbus_channel *channel);
 
+void vmbus_remove_channel_attr_group(struct vmbus_channel *channel);
+
 struct vmbus_channel *relid2channel(u32 relid);
 
 void vmbus_free_channels(void);
@@ -609,7 +609,36 @@ static struct attribute *vmbus_dev_attrs[] = {
	&dev_attr_device.attr,
	NULL,
 };
-ATTRIBUTE_GROUPS(vmbus_dev);
+
+/*
+ * Device-level attribute_group callback function. Returns the permission for
+ * each attribute, and returns 0 if an attribute is not visible.
+ */
+static umode_t vmbus_dev_attr_is_visible(struct kobject *kobj,
+					 struct attribute *attr, int idx)
+{
+	struct device *dev = kobj_to_dev(kobj);
+	const struct hv_device *hv_dev = device_to_hv_device(dev);
+
+	/* Hide the monitor attributes if the monitor mechanism is not used. */
+	if (!hv_dev->channel->offermsg.monitor_allocated &&
+	    (attr == &dev_attr_monitor_id.attr ||
+	     attr == &dev_attr_server_monitor_pending.attr ||
+	     attr == &dev_attr_client_monitor_pending.attr ||
+	     attr == &dev_attr_server_monitor_latency.attr ||
+	     attr == &dev_attr_client_monitor_latency.attr ||
+	     attr == &dev_attr_server_monitor_conn_id.attr ||
+	     attr == &dev_attr_client_monitor_conn_id.attr))
+		return 0;
+
+	return attr->mode;
+}
+
+static const struct attribute_group vmbus_dev_group = {
+	.attrs = vmbus_dev_attrs,
+	.is_visible = vmbus_dev_attr_is_visible
+};
+__ATTRIBUTE_GROUPS(vmbus_dev);
 
 /*
  * vmbus_uevent - add uevent for our device
@@ -1484,10 +1513,34 @@ static struct attribute *vmbus_chan_attrs[] = {
	NULL
 };
 
+/*
+ * Channel-level attribute_group callback function. Returns the permission for
+ * each attribute, and returns 0 if an attribute is not visible.
+ */
+static umode_t vmbus_chan_attr_is_visible(struct kobject *kobj,
+					  struct attribute *attr, int idx)
+{
+	const struct vmbus_channel *channel =
+		container_of(kobj, struct vmbus_channel, kobj);
+
+	/* Hide the monitor attributes if the monitor mechanism is not used. */
+	if (!channel->offermsg.monitor_allocated &&
+	    (attr == &chan_attr_pending.attr ||
+	     attr == &chan_attr_latency.attr ||
+	     attr == &chan_attr_monitor_id.attr))
+		return 0;
+
+	return attr->mode;
+}
+
+static struct attribute_group vmbus_chan_group = {
+	.attrs = vmbus_chan_attrs,
+	.is_visible = vmbus_chan_attr_is_visible
+};
+
 static struct kobj_type vmbus_chan_ktype = {
	.sysfs_ops = &vmbus_chan_sysfs_ops,
	.release = vmbus_chan_release,
-	.default_attrs = vmbus_chan_attrs,
 };
 
 /*
@@ -1495,6 +1548,7 @@ static struct kobj_type vmbus_chan_ktype = {
  */
 int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
 {
+	const struct device *device = &dev->device;
	struct kobject *kobj = &channel->kobj;
	u32 relid = channel->offermsg.child_relid;
	int ret;
@@ -1502,14 +1556,36 @@ int vmbus_add_channel_kobj(struct hv_device *dev, struct vmbus_channel *channel)
	kobj->kset = dev->channels_kset;
	ret = kobject_init_and_add(kobj, &vmbus_chan_ktype, NULL,
				   "%u", relid);
-	if (ret)
+	if (ret) {
+		kobject_put(kobj);
		return ret;
+	}
+
+	ret = sysfs_create_group(kobj, &vmbus_chan_group);
+
+	if (ret) {
+		/*
+		 * The calling functions' error handling paths will cleanup the
+		 * empty channel directory.
+		 */
+		kobject_put(kobj);
+		dev_err(device, "Unable to set up channel sysfs files\n");
+		return ret;
+	}
 
	kobject_uevent(kobj, KOBJ_ADD);
 
	return 0;
 }
 
+/*
+ * vmbus_remove_channel_attr_group - remove the channel's attribute group
+ */
+void vmbus_remove_channel_attr_group(struct vmbus_channel *channel)
+{
+	sysfs_remove_group(&channel->kobj, &vmbus_chan_group);
+}
+
 /*
  * vmbus_device_create - Creates and registers a new child device
  * on the vmbus.
@@ -304,7 +304,7 @@ static int i8k_get_fan_nominal_speed(int fan, int speed)
 }
 
 /*
- * Set the fan speed (off, low, high). Returns the new fan status.
+ * Set the fan speed (off, low, high, ...).
  */
 static int i8k_set_fan(int fan, int speed)
 {
@@ -316,7 +316,7 @@ static int i8k_set_fan(int fan, int speed)
	speed = (speed < 0) ? 0 : ((speed > i8k_fan_max) ? i8k_fan_max : speed);
	regs.ebx = (fan & 0xff) | (speed << 8);
 
-	return i8k_smm(&regs) ? : i8k_get_fan_status(fan);
+	return i8k_smm(&regs);
 }
 
 static int i8k_get_temp_type(int sensor)
@@ -430,7 +430,7 @@ static int
 i8k_ioctl_unlocked(struct file *fp, unsigned int cmd, unsigned long arg)
 {
	int val = 0;
-	int speed;
+	int speed, err;
	unsigned char buff[16];
	int __user *argp = (int __user *)arg;
 
@@ -491,7 +491,11 @@ i8k_ioctl_unlocked(struct file *fp, unsigned int cmd, unsigned long arg)
		if (copy_from_user(&speed, argp + 1, sizeof(int)))
			return -EFAULT;
 
-		val = i8k_set_fan(val, speed);
+		err = i8k_set_fan(val, speed);
+		if (err < 0)
+			return err;
+
+		val = i8k_get_fan_status(val);
		break;
 
	default:
@@ -359,7 +359,7 @@ static const struct lm90_params lm90_params[] = {
		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT
		  | LM90_HAVE_BROKEN_ALERT,
		.alert_alarms = 0x7c,
-		.max_convrate = 8,
+		.max_convrate = 7,
	},
	[lm86] = {
		.flags = LM90_HAVE_OFFSET | LM90_HAVE_REM_LIMIT_EXT,
@@ -645,7 +645,7 @@ static int brcmstb_i2c_probe(struct platform_device *pdev)
 
	/* set the data in/out register size for compatible SoCs */
	if (of_device_is_compatible(dev->device->of_node,
-				    "brcmstb,brcmper-i2c"))
+				    "brcm,brcmper-i2c"))
		dev->data_regsz = sizeof(u8);
	else
		dev->data_regsz = sizeof(u32);
@@ -106,6 +106,7 @@ static int men_z188_probe(struct mcb_device *dev,
	struct z188_adc *adc;
	struct iio_dev *indio_dev;
	struct resource *mem;
+	int ret;
 
	indio_dev = devm_iio_device_alloc(&dev->dev, sizeof(struct z188_adc));
	if (!indio_dev)
@@ -132,8 +133,14 @@ static int men_z188_probe(struct mcb_device *dev,
	adc->mem = mem;
	mcb_set_drvdata(dev, indio_dev);
 
-	return iio_device_register(indio_dev);
+	ret = iio_device_register(indio_dev);
+	if (ret)
+		goto err_unmap;
+
+	return 0;
 
+err_unmap:
+	iounmap(adc->base);
 err:
	mcb_release_mem(mem);
	return -ENXIO;
@@ -3351,7 +3351,7 @@ static void mlx4_ib_event(struct mlx4_dev *dev, void *ibdev_ptr,
	case MLX4_DEV_EVENT_PORT_MGMT_CHANGE:
		ew = kmalloc(sizeof *ew, GFP_ATOMIC);
		if (!ew)
-			break;
+			return;
 
		INIT_WORK(&ew->work, handle_port_mgmt_change_event);
		memcpy(&ew->ib_eqe, eqe, sizeof *eqe);
@@ -4154,9 +4154,11 @@ static void srp_remove_one(struct ib_device *device, void *client_data)
		spin_unlock(&host->target_lock);
 
		/*
-		 * Wait for tl_err and target port removal tasks.
+		 * srp_queue_remove_work() queues a call to
+		 * srp_remove_target(). The latter function cancels
+		 * target->tl_err_work so waiting for the remove works to
+		 * finish is sufficient.
		 */
-		flush_workqueue(system_long_wq);
		flush_workqueue(srp_remove_wq);
 
		kfree(host);
@@ -30,6 +30,7 @@
 #include <linux/iommu.h>
 #include <linux/kmemleak.h>
 #include <linux/mem_encrypt.h>
+#include <linux/iopoll.h>
 #include <asm/pci-direct.h>
 #include <asm/iommu.h>
 #include <asm/gart.h>
@@ -772,6 +773,7 @@ static int iommu_ga_log_enable(struct amd_iommu *iommu)
		status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);
		if (status & (MMIO_STATUS_GALOG_RUN_MASK))
			break;
+		udelay(10);
	}
 
	if (i >= LOOP_TIMEOUT)
@@ -543,9 +543,8 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
					    fn, &intel_ir_domain_ops,
					    iommu);
	if (!iommu->ir_domain) {
-		irq_domain_free_fwnode(fn);
		pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);
-		goto out_free_bitmap;
+		goto out_free_fwnode;
	}
	iommu->ir_msi_domain =
		arch_create_remap_msi_irq_domain(iommu->ir_domain,
@@ -569,7 +568,7 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 
	if (dmar_enable_qi(iommu)) {
		pr_err("Failed to enable queued invalidation\n");
-		goto out_free_bitmap;
+		goto out_free_ir_domain;
	}
 }
 
@@ -593,6 +592,14 @@ static int intel_setup_irq_remapping(struct intel_iommu *iommu)
 
	return 0;
 
+out_free_ir_domain:
+	if (iommu->ir_msi_domain)
+		irq_domain_remove(iommu->ir_msi_domain);
+	iommu->ir_msi_domain = NULL;
+	irq_domain_remove(iommu->ir_domain);
+	iommu->ir_domain = NULL;
+out_free_fwnode:
+	irq_domain_free_fwnode(fn);
 out_free_bitmap:
	kfree(bitmap);
 out_free_pages:
@@ -258,3 +258,4 @@ out_iounmap:
 
 IRQCHIP_DECLARE(sifive_plic, "sifive,plic-1.0.0", plic_init);
 IRQCHIP_DECLARE(riscv_plic0, "riscv,plic0", plic_init); /* for legacy systems */
+IRQCHIP_DECLARE(thead_c900_plic, "thead,c900-plic", plic_init); /* for firmware driver */
@@ -1737,32 +1737,32 @@ static void mmc_blk_read_single(struct mmc_queue *mq, struct request *req)
	struct mmc_card *card = mq->card;
	struct mmc_host *host = card->host;
	blk_status_t error = BLK_STS_OK;
-	int retries = 0;
 
	do {
		u32 status;
		int err;
+		int retries = 0;
 
-		mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
+		while (retries++ <= MMC_READ_SINGLE_RETRIES) {
+			mmc_blk_rw_rq_prep(mqrq, card, 1, mq);
 
-		mmc_wait_for_req(host, mrq);
+			mmc_wait_for_req(host, mrq);
 
-		err = mmc_send_status(card, &status);
-		if (err)
-			goto error_exit;
-
-		if (!mmc_host_is_spi(host) &&
-		    !mmc_blk_in_tran_state(status)) {
-			err = mmc_blk_fix_state(card, req);
+			err = mmc_send_status(card, &status);
			if (err)
				goto error_exit;
+
+			if (!mmc_host_is_spi(host) &&
+			    !mmc_blk_in_tran_state(status)) {
+				err = mmc_blk_fix_state(card, req);
+				if (err)
+					goto error_exit;
+			}
+
+			if (!mrq->cmd->error)
+				break;
		}
 
-		if (mrq->cmd->error && retries++ < MMC_READ_SINGLE_RETRIES)
-			continue;
-
-		retries = 0;
-
		if (mrq->cmd->error ||
		    mrq->data->error ||
		    (!mmc_host_is_spi(host) &&
@@ -696,12 +696,12 @@ static int moxart_remove(struct platform_device *pdev)
	if (!IS_ERR(host->dma_chan_rx))
		dma_release_channel(host->dma_chan_rx);
	mmc_remove_host(mmc);
-	mmc_free_host(mmc);
 
	writel(0, host->base + REG_INTERRUPT_MASK);
	writel(0, host->base + REG_POWER_CONTROL);
	writel(readl(host->base + REG_CLOCK_CONTROL) | CLK_OFF,
	       host->base + REG_CLOCK_CONTROL);
+	mmc_free_host(mmc);
	}
	return 0;
 }
@@ -472,12 +472,16 @@ static void esdhc_of_adma_workaround(struct sdhci_host *host, u32 intmask)
 
 static int esdhc_of_enable_dma(struct sdhci_host *host)
 {
+	int ret;
	u32 value;
	struct device *dev = mmc_dev(host->mmc);
 
	if (of_device_is_compatible(dev->of_node, "fsl,ls1043a-esdhc") ||
-	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc"))
-		dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+	    of_device_is_compatible(dev->of_node, "fsl,ls1046a-esdhc")) {
+		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+		if (ret)
+			return ret;
+	}
 
	value = sdhci_readl(host, ESDHC_DMA_SYSCTL);
 
@@ -589,6 +589,54 @@ static inline void brcmnand_write_fc(struct brcmnand_controller *ctrl,
	__raw_writel(val, ctrl->nand_fc + word * 4);
 }
 
+static void brcmnand_clear_ecc_addr(struct brcmnand_controller *ctrl)
+{
+	/* Clear error addresses */
+	brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_ADDR, 0);
+	brcmnand_write_reg(ctrl, BRCMNAND_CORR_ADDR, 0);
+	brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_EXT_ADDR, 0);
+	brcmnand_write_reg(ctrl, BRCMNAND_CORR_EXT_ADDR, 0);
+}
+
+static u64 brcmnand_get_uncorrecc_addr(struct brcmnand_controller *ctrl)
+{
+	u64 err_addr;
+
+	err_addr = brcmnand_read_reg(ctrl, BRCMNAND_UNCORR_ADDR);
+	err_addr |= ((u64)(brcmnand_read_reg(ctrl,
+					     BRCMNAND_UNCORR_EXT_ADDR)
+					     & 0xffff) << 32);
+
+	return err_addr;
+}
+
+static u64 brcmnand_get_correcc_addr(struct brcmnand_controller *ctrl)
+{
+	u64 err_addr;
+
+	err_addr = brcmnand_read_reg(ctrl, BRCMNAND_CORR_ADDR);
+	err_addr |= ((u64)(brcmnand_read_reg(ctrl,
+					     BRCMNAND_CORR_EXT_ADDR)
+					     & 0xffff) << 32);
+
+	return err_addr;
+}
+
+static void brcmnand_set_cmd_addr(struct mtd_info *mtd, u64 addr)
+{
+	struct nand_chip *chip = mtd_to_nand(mtd);
+	struct brcmnand_host *host = nand_get_controller_data(chip);
+	struct brcmnand_controller *ctrl = host->ctrl;
+
+	brcmnand_write_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS,
+			   (host->cs << 16) | ((addr >> 32) & 0xffff));
+	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS);
+	brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS,
+			   lower_32_bits(addr));
+	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
+}
+
 static inline u16 brcmnand_cs_offset(struct brcmnand_controller *ctrl, int cs,
				     enum brcmnand_cs_reg reg)
 {
@@ -1217,9 +1265,12 @@ static void brcmnand_send_cmd(struct brcmnand_host *host, int cmd)
 {
	struct brcmnand_controller *ctrl = host->ctrl;
	int ret;
+	u64 cmd_addr;
+
+	cmd_addr = brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
+
+	dev_dbg(ctrl->dev, "send native cmd %d addr 0x%llx\n", cmd, cmd_addr);
 
-	dev_dbg(ctrl->dev, "send native cmd %d addr_lo 0x%x\n", cmd,
-		brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS));
	BUG_ON(ctrl->cmd_pending != 0);
	ctrl->cmd_pending = cmd;
 
@@ -1380,12 +1431,7 @@ static void brcmnand_cmdfunc(struct mtd_info *mtd, unsigned command,
	if (!native_cmd)
		return;
 
-	brcmnand_write_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS,
-		(host->cs << 16) | ((addr >> 32) & 0xffff));
-	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS);
-	brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS, lower_32_bits(addr));
-	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
-
+	brcmnand_set_cmd_addr(mtd, addr);
	brcmnand_send_cmd(host, native_cmd);
	brcmnand_waitfunc(mtd, chip);
 
@@ -1605,20 +1651,10 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
	struct brcmnand_controller *ctrl = host->ctrl;
	int i, j, ret = 0;
 
-	/* Clear error addresses */
-	brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_ADDR, 0);
-	brcmnand_write_reg(ctrl, BRCMNAND_CORR_ADDR, 0);
-	brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_EXT_ADDR, 0);
-	brcmnand_write_reg(ctrl, BRCMNAND_CORR_EXT_ADDR, 0);
-
-	brcmnand_write_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS,
-			(host->cs << 16) | ((addr >> 32) & 0xffff));
-	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS);
+	brcmnand_clear_ecc_addr(ctrl);
 
	for (i = 0; i < trans; i++, addr += FC_BYTES) {
-		brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS,
-				lower_32_bits(addr));
-		(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
+		brcmnand_set_cmd_addr(mtd, addr);
		/* SPARE_AREA_READ does not use ECC, so just use PAGE_READ */
		brcmnand_send_cmd(host, CMD_PAGE_READ);
		brcmnand_waitfunc(mtd, chip);
@@ -1637,22 +1673,16 @@ static int brcmnand_read_by_pio(struct mtd_info *mtd, struct nand_chip *chip,
					       mtd->oobsize / trans,
					       host->hwcfg.sector_size_1k);
 
-		if (!ret) {
-			*err_addr = brcmnand_read_reg(ctrl,
-					BRCMNAND_UNCORR_ADDR) |
-				((u64)(brcmnand_read_reg(ctrl,
-						BRCMNAND_UNCORR_EXT_ADDR)
-					& 0xffff) << 32);
+		if (ret != -EBADMSG) {
+			*err_addr = brcmnand_get_uncorrecc_addr(ctrl);
+
			if (*err_addr)
				ret = -EBADMSG;
		}
 
		if (!ret) {
-			*err_addr = brcmnand_read_reg(ctrl,
-					BRCMNAND_CORR_ADDR) |
-				((u64)(brcmnand_read_reg(ctrl,
-						BRCMNAND_CORR_EXT_ADDR)
-					& 0xffff) << 32);
+			*err_addr = brcmnand_get_correcc_addr(ctrl);
+
			if (*err_addr)
				ret = -EUCLEAN;
		}
@@ -1722,7 +1752,7 @@ static int brcmnand_read(struct mtd_info *mtd, struct nand_chip *chip,
	dev_dbg(ctrl->dev, "read %llx -> %p\n", (unsigned long long)addr, buf);
 
 try_dmaread:
-	brcmnand_write_reg(ctrl, BRCMNAND_UNCORR_COUNT, 0);
+	brcmnand_clear_ecc_addr(ctrl);
 
	if (has_flash_dma(ctrl) && !oob && flash_dma_buf_ok(buf)) {
		err = brcmnand_dma_trans(host, addr, buf, trans * FC_BYTES,
@@ -1866,15 +1896,9 @@ static int brcmnand_write(struct mtd_info *mtd, struct nand_chip *chip,
		goto out;
	}
 
-	brcmnand_write_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS,
-			(host->cs << 16) | ((addr >> 32) & 0xffff));
-	(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_EXT_ADDRESS);
-
	for (i = 0; i < trans; i++, addr += FC_BYTES) {
		/* full address MUST be set before populating FC */
-		brcmnand_write_reg(ctrl, BRCMNAND_CMD_ADDRESS,
-				lower_32_bits(addr));
-		(void)brcmnand_read_reg(ctrl, BRCMNAND_CMD_ADDRESS);
+		brcmnand_set_cmd_addr(mtd, addr);
 
		if (buf) {
			brcmnand_soc_data_bus_prepare(ctrl->soc, false);
@@ -10,7 +10,6 @@
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */
-
 #include <linux/clk.h>
 #include <linux/slab.h>
 #include <linux/bitops.h>
@@ -2959,10 +2958,6 @@ static int qcom_nandc_probe(struct platform_device *pdev)
	if (!nandc->base_dma)
		return -ENXIO;
 
-	ret = qcom_nandc_alloc(nandc);
-	if (ret)
-		goto err_nandc_alloc;
-
	ret = clk_prepare_enable(nandc->core_clk);
	if (ret)
		goto err_core_clk;
@@ -2971,6 +2966,10 @@ static int qcom_nandc_probe(struct platform_device *pdev)
	if (ret)
		goto err_aon_clk;
 
+	ret = qcom_nandc_alloc(nandc);
+	if (ret)
+		goto err_nandc_alloc;
+
	ret = qcom_nandc_setup(nandc);
	if (ret)
		goto err_setup;
@@ -2982,15 +2981,14 @@ static int qcom_nandc_probe(struct platform_device *pdev)
	return 0;
 
 err_setup:
+	qcom_nandc_unalloc(nandc);
+err_nandc_alloc:
	clk_disable_unprepare(nandc->aon_clk);
 err_aon_clk:
	clk_disable_unprepare(nandc->core_clk);
 err_core_clk:
-	qcom_nandc_unalloc(nandc);
-err_nandc_alloc:
	dma_unmap_resource(dev, res->start, resource_size(res),
			   DMA_BIDIRECTIONAL, 0);
 
	return ret;
 }
 
@@ -249,7 +249,7 @@ static inline int __check_agg_selection_timer(struct port *port)
	if (bond == NULL)
		return 0;
 
-	return BOND_AD_INFO(bond).agg_select_timer ? 1 : 0;
+	return atomic_read(&BOND_AD_INFO(bond).agg_select_timer) ? 1 : 0;
 }
 
 /**
@@ -1012,8 +1012,8 @@ static void ad_mux_machine(struct port *port, bool *update_slave_arr)
				if (port->aggregator &&
				    port->aggregator->is_active &&
				    !__port_is_enabled(port)) {
-
					__enable_port(port);
+					*update_slave_arr = true;
				}
			}
			break;
@@ -1760,6 +1760,7 @@ static void ad_agg_selection_logic(struct aggregator *agg,
			     port = port->next_port_in_aggregator) {
				__enable_port(port);
			}
+			*update_slave_arr = true;
		}
	}
 
@@ -1964,7 +1965,7 @@ static void ad_marker_response_received(struct bond_marker *marker,
 */
 void bond_3ad_initiate_agg_selection(struct bonding *bond, int timeout)
 {
-	BOND_AD_INFO(bond).agg_select_timer = timeout;
+	atomic_set(&BOND_AD_INFO(bond).agg_select_timer, timeout);
 }
 
 /**
@@ -2248,6 +2249,28 @@ void bond_3ad_update_ad_actor_settings(struct bonding *bond)
	spin_unlock_bh(&bond->mode_lock);
 }
 
+/**
+ * bond_agg_timer_advance - advance agg_select_timer
+ * @bond: bonding structure
+ *
+ * Return true when agg_select_timer reaches 0.
+ */
+static bool bond_agg_timer_advance(struct bonding *bond)
+{
+	int val, nval;
+
+	while (1) {
+		val = atomic_read(&BOND_AD_INFO(bond).agg_select_timer);
+		if (!val)
+			return false;
+		nval = val - 1;
+		if (atomic_cmpxchg(&BOND_AD_INFO(bond).agg_select_timer,
+				   val, nval) == val)
+			break;
+	}
+	return nval == 0;
+}
+
 /**
  * bond_3ad_state_machine_handler - handle state machines timeout
  * @bond: bonding struct to work on
@@ -2283,9 +2306,7 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
	if (!bond_has_slaves(bond))
		goto re_arm;
 
-	/* check if agg_select_timer timer after initialize is timed out */
-	if (BOND_AD_INFO(bond).agg_select_timer &&
-	    !(--BOND_AD_INFO(bond).agg_select_timer)) {
+	if (bond_agg_timer_advance(bond)) {
		slave = bond_first_slave_rcu(bond);
		port = slave ? &(SLAVE_AD_INFO(slave)->port) : NULL;
 
@@ -1307,7 +1307,7 @@ static int lan9303_probe_reset_gpio(struct lan9303 *chip,
				     struct device_node *np)
 {
	chip->reset_gpio = devm_gpiod_get_optional(chip->dev, "reset",
-						   GPIOD_OUT_LOW);
+						   GPIOD_OUT_HIGH);
	if (IS_ERR(chip->reset_gpio))
		return PTR_ERR(chip->reset_gpio);
 
@@ -722,7 +722,9 @@ static void xgbe_stop_timers(struct xgbe_prv_data *pdata)
		if (!channel->tx_ring)
			break;
 
+		/* Deactivate the Tx timer */
		del_timer_sync(&channel->tx_timer);
+		channel->tx_timer_active = 0;
	}
 }
 
@@ -2766,6 +2768,14 @@ read_again:
			buf2_len = xgbe_rx_buf2_len(rdata, packet, len);
			len += buf2_len;
 
+			if (buf2_len > rdata->rx.buf.dma_len) {
+				/* Hardware inconsistency within the descriptors
+				 * that has resulted in a length underflow.
+				 */
+				error = 1;
+				goto skip_data;
+			}
+
			if (!skb) {
				skb = xgbe_create_skb(pdata, napi, rdata,
						      buf1_len);
@@ -2795,8 +2805,10 @@ skip_data:
		if (!last || context_next)
			goto read_again;
 
-		if (!skb)
+		if (!skb || error) {
+			dev_kfree_skb(skb);
			goto next_packet;
+		}
 
		/* Be sure we don't exceed the configured MTU */
		max_len = netdev->mtu + ETH_HLEN;
@@ -418,6 +418,9 @@ static void xgbe_pci_remove(struct pci_dev *pdev)
 
	pci_free_irq_vectors(pdata->pcidev);
 
+	/* Disable all interrupts in the hardware */
+	XP_IOWRITE(pdata, XP_INT_EN, 0x0);
+
	xgbe_free_pdata(pdata);
 }
 
@@ -4073,7 +4073,7 @@ static int macb_probe(struct platform_device *pdev)
 
 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
	if (GEM_BFEXT(DAW64, gem_readl(bp, DCFG6))) {
-		dma_set_mask(&pdev->dev, DMA_BIT_MASK(44));
+		dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(44));
		bp->hw_dma_cap |= HW_DMA_CAP_64B;
	}
 #endif
@@ -3044,11 +3044,25 @@ static void ibmvnic_send_req_caps(struct ibmvnic_adapter *adapter, int retry)
	struct device *dev = &adapter->vdev->dev;
	union ibmvnic_crq crq;
	int max_entries;
+	int cap_reqs;
+
+	/* We send out 6 or 7 REQUEST_CAPABILITY CRQs below (depending on
+	 * the PROMISC flag). Initialize this count upfront. When the tasklet
+	 * receives a response to all of these, it will send the next protocol
+	 * message (QUERY_IP_OFFLOAD).
+	 */
+	if (!(adapter->netdev->flags & IFF_PROMISC) ||
+	    adapter->promisc_supported)
+		cap_reqs = 7;
+	else
+		cap_reqs = 6;
 
	if (!retry) {
		/* Sub-CRQ entries are 32 byte long */
		int entries_page = 4 * PAGE_SIZE / (sizeof(u64) * 4);
 
+		atomic_set(&adapter->running_cap_crqs, cap_reqs);
+
		if (adapter->min_tx_entries_per_subcrq > entries_page ||
		    adapter->min_rx_add_entries_per_subcrq > entries_page) {
			dev_err(dev, "Fatal, invalid entries per sub-crq\n");
@@ -3109,44 +3123,45 @@ static void ibmvnic_send_req_caps(struct ibmvnic_adapter *adapter, int retry)
			adapter->opt_rx_comp_queues;
 
		adapter->req_rx_add_queues = adapter->max_rx_add_queues;
+	} else {
+		atomic_add(cap_reqs, &adapter->running_cap_crqs);
	}
 
	memset(&crq, 0, sizeof(crq));
	crq.request_capability.first = IBMVNIC_CRQ_CMD;
	crq.request_capability.cmd = REQUEST_CAPABILITY;
 
	crq.request_capability.capability = cpu_to_be16(REQ_TX_QUEUES);
	crq.request_capability.number = cpu_to_be64(adapter->req_tx_queues);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	crq.request_capability.capability = cpu_to_be16(REQ_RX_QUEUES);
	crq.request_capability.number = cpu_to_be64(adapter->req_rx_queues);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	crq.request_capability.capability = cpu_to_be16(REQ_RX_ADD_QUEUES);
	crq.request_capability.number = cpu_to_be64(adapter->req_rx_add_queues);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	crq.request_capability.capability =
	    cpu_to_be16(REQ_TX_ENTRIES_PER_SUBCRQ);
	crq.request_capability.number =
	    cpu_to_be64(adapter->req_tx_entries_per_subcrq);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	crq.request_capability.capability =
	    cpu_to_be16(REQ_RX_ADD_ENTRIES_PER_SUBCRQ);
	crq.request_capability.number =
	    cpu_to_be64(adapter->req_rx_add_entries_per_subcrq);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	crq.request_capability.capability = cpu_to_be16(REQ_MTU);
	crq.request_capability.number = cpu_to_be64(adapter->req_mtu);
-	atomic_inc(&adapter->running_cap_crqs);
+	cap_reqs--;
	ibmvnic_send_crq(adapter, &crq);
 
	if (adapter->netdev->flags & IFF_PROMISC) {
@@ -3154,16 +3169,21 @@ static void ibmvnic_send_req_caps(struct ibmvnic_adapter *adapter, int retry)
			crq.request_capability.capability =
			    cpu_to_be16(PROMISC_REQUESTED);
			crq.request_capability.number = cpu_to_be64(1);
-			atomic_inc(&adapter->running_cap_crqs);
+			cap_reqs--;
			ibmvnic_send_crq(adapter, &crq);
		}
	} else {
		crq.request_capability.capability =
		    cpu_to_be16(PROMISC_REQUESTED);
		crq.request_capability.number = cpu_to_be64(0);
-		atomic_inc(&adapter->running_cap_crqs);
+		cap_reqs--;
		ibmvnic_send_crq(adapter, &crq);
	}
+
+	/* Keep at end to catch any discrepancy between expected and actual
+	 * CRQs sent.
+	 */
+	WARN_ON(cap_reqs != 0);
 }
 
 static int pending_scrq(struct ibmvnic_adapter *adapter,
@@ -3568,118 +3588,132 @@ static void send_map_query(struct ibmvnic_adapter *adapter)
 static void send_cap_queries(struct ibmvnic_adapter *adapter)
 {
	union ibmvnic_crq crq;
+	int cap_reqs;
+
+	/* We send out 25 QUERY_CAPABILITY CRQs below. Initialize this count
+	 * upfront. When the tasklet receives a response to all of these, it
+	 * can send out the next protocol messaage (REQUEST_CAPABILITY).
+	 */
+	cap_reqs = 25;
+
+	atomic_set(&adapter->running_cap_crqs, cap_reqs);
 
-	atomic_set(&adapter->running_cap_crqs, 0);
	memset(&crq, 0, sizeof(crq));
	crq.query_capability.first = IBMVNIC_CRQ_CMD;
	crq.query_capability.cmd = QUERY_CAPABILITY;
 
	crq.query_capability.capability = cpu_to_be16(MIN_TX_QUEUES);
-	atomic_inc(&adapter->running_cap_crqs);
	ibmvnic_send_crq(adapter, &crq);
+	cap_reqs--;
 
	crq.query_capability.capability = cpu_to_be16(MIN_RX_QUEUES);
-	atomic_inc(&adapter->running_cap_crqs);
	ibmvnic_send_crq(adapter, &crq);
+	cap_reqs--;
 
	crq.query_capability.capability = cpu_to_be16(MIN_RX_ADD_QUEUES);
-	atomic_inc(&adapter->running_cap_crqs);
	ibmvnic_send_crq(adapter, &crq);
+	cap_reqs--;
 
	crq.query_capability.capability = cpu_to_be16(MAX_TX_QUEUES);
-	atomic_inc(&adapter->running_cap_crqs);
	ibmvnic_send_crq(adapter, &crq);
+	cap_reqs--;
 
	crq.query_capability.capability = cpu_to_be16(MAX_RX_QUEUES);
-	atomic_inc(&adapter->running_cap_crqs);
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_RX_ADD_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MIN_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MIN_RX_ADD_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MAX_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(MAX_RX_ADD_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(TCP_IP_OFFLOAD);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(PROMISC_SUPPORTED);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MIN_MTU);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_MTU);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_MULTICAST_FILTERS);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(VLAN_HEADER_INSERTION);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(RX_VLAN_HEADER_INSERTION);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(MAX_TX_SG_ENTRIES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(RX_SG_SUPPORTED);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(OPT_TX_COMP_SUB_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(OPT_RX_COMP_QUEUES);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_RX_BUFADD_Q_PER_RX_COMP_Q);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_TX_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability =
|
||||
cpu_to_be16(OPT_RXBA_ENTRIES_PER_SUBCRQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
crq.query_capability.capability = cpu_to_be16(TX_RX_DESC_REQ);
|
||||
atomic_inc(&adapter->running_cap_crqs);
|
||||
|
||||
ibmvnic_send_crq(adapter, &crq);
|
||||
cap_reqs--;
|
||||
|
||||
/* Keep at end to catch any discrepancy between expected and actual
|
||||
* CRQs sent.
|
||||
*/
|
||||
WARN_ON(cap_reqs != 0);
|
||||
}
|
||||
|
||||
static void handle_vpd_size_rsp(union ibmvnic_crq *crq,
|
||||
@ -3923,6 +3957,8 @@ static void handle_request_cap_rsp(union ibmvnic_crq *crq,
|
||||
char *name;
|
||||
|
||||
atomic_dec(&adapter->running_cap_crqs);
|
||||
netdev_dbg(adapter->netdev, "Outstanding request-caps: %d\n",
|
||||
atomic_read(&adapter->running_cap_crqs));
|
||||
switch (be16_to_cpu(crq->request_capability_rsp.capability)) {
|
||||
case REQ_TX_QUEUES:
|
||||
req_value = &adapter->req_tx_queues;
|
||||
@ -4457,12 +4493,6 @@ static void ibmvnic_tasklet(void *data)
|
||||
ibmvnic_handle_crq(crq, adapter);
|
||||
crq->generic.first = 0;
|
||||
}
|
||||
|
||||
/* remain in tasklet until all
|
||||
* capabilities responses are received
|
||||
*/
|
||||
if (!adapter->wait_capability)
|
||||
done = true;
|
||||
}
|
||||
/* if capabilities CRQ's were sent in this tasklet, the following
|
||||
* tasklet must wait until all responses are received
|
||||
|
@@ -179,7 +179,6 @@ enum i40e_interrupt_policy {

struct i40e_lump_tracking {
	u16 num_entries;
	u16 search_hint;
	u16 list[0];
#define I40E_PILE_VALID_BIT 0x8000
#define I40E_IWARP_IRQ_PILE_ID (I40E_PILE_VALID_BIT - 2)

@@ -709,12 +708,12 @@ struct i40e_vsi {
	struct rtnl_link_stats64 net_stats_offsets;
	struct i40e_eth_stats eth_stats;
	struct i40e_eth_stats eth_stats_offsets;
	u32 tx_restart;
	u32 tx_busy;
	u64 tx_restart;
	u64 tx_busy;
	u64 tx_linearize;
	u64 tx_force_wb;
	u32 rx_buf_failed;
	u32 rx_page_failed;
	u64 rx_buf_failed;
	u64 rx_page_failed;

	/* These are containers of ring pointers, allocated at run-time */
	struct i40e_ring **rx_rings;
@@ -236,7 +236,7 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
		 (unsigned long int)vsi->net_stats_offsets.rx_compressed,
		 (unsigned long int)vsi->net_stats_offsets.tx_compressed);
	dev_info(&pf->pdev->dev,
		 " tx_restart = %d, tx_busy = %d, rx_buf_failed = %d, rx_page_failed = %d\n",
		 " tx_restart = %llu, tx_busy = %llu, rx_buf_failed = %llu, rx_page_failed = %llu\n",
		 vsi->tx_restart, vsi->tx_busy,
		 vsi->rx_buf_failed, vsi->rx_page_failed);
	rcu_read_lock();
@@ -193,10 +193,6 @@ int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
 * @id: an owner id to stick on the items assigned
 *
 * Returns the base item index of the lump, or negative for error
 *
 * The search_hint trick and lack of advanced fit-finding only work
 * because we're highly likely to have all the same size lump requests.
 * Linear search time and any fragmentation should be minimal.
 **/
static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
			 u16 needed, u16 id)

@@ -211,8 +207,21 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
		return -EINVAL;
	}

	/* start the linear search with an imperfect hint */
	i = pile->search_hint;
	/* Allocate last queue in the pile for FDIR VSI queue
	 * so it doesn't fragment the qp_pile
	 */
	if (pile == pf->qp_pile && pf->vsi[id]->type == I40E_VSI_FDIR) {
		if (pile->list[pile->num_entries - 1] & I40E_PILE_VALID_BIT) {
			dev_err(&pf->pdev->dev,
				"Cannot allocate queue %d for I40E_VSI_FDIR\n",
				pile->num_entries - 1);
			return -ENOMEM;
		}
		pile->list[pile->num_entries - 1] = id | I40E_PILE_VALID_BIT;
		return pile->num_entries - 1;
	}

	i = 0;
	while (i < pile->num_entries) {
		/* skip already allocated entries */
		if (pile->list[i] & I40E_PILE_VALID_BIT) {

@@ -231,7 +240,6 @@ static int i40e_get_lump(struct i40e_pf *pf, struct i40e_lump_tracking *pile,
			for (j = 0; j < needed; j++)
				pile->list[i+j] = id | I40E_PILE_VALID_BIT;
			ret = i;
			pile->search_hint = i + j;
			break;
		}

@@ -254,7 +262,7 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
{
	int valid_id = (id | I40E_PILE_VALID_BIT);
	int count = 0;
	int i;
	u16 i;

	if (!pile || index >= pile->num_entries)
		return -EINVAL;

@@ -266,8 +274,6 @@ static int i40e_put_lump(struct i40e_lump_tracking *pile, u16 index, u16 id)
		count++;
	}

	if (count && index < pile->search_hint)
		pile->search_hint = index;

	return count;
}

@@ -785,9 +791,9 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
	struct rtnl_link_stats64 *ns; /* netdev stats */
	struct i40e_eth_stats *oes;
	struct i40e_eth_stats *es; /* device's eth stats */
	u32 tx_restart, tx_busy;
	u64 tx_restart, tx_busy;
	struct i40e_ring *p;
	u32 rx_page, rx_buf;
	u64 rx_page, rx_buf;
	u64 bytes, packets;
	unsigned int start;
	u64 tx_linearize;

@@ -9486,15 +9492,9 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
	}
	i40e_get_oem_version(&pf->hw);

	if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) &&
	    ((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) ||
	     hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) {
		/* The following delay is necessary for 4.33 firmware and older
		 * to recover after EMP reset. 200 ms should suffice but we
		 * put here 300 ms to be sure that FW is ready to operate
		 * after reset.
		 */
		mdelay(300);
	if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) {
		/* The following delay is necessary for firmware update. */
		mdelay(1000);
	}

	/* re-verify the eeprom if we just had an EMP reset */

@@ -10733,7 +10733,6 @@ static int i40e_init_interrupt_scheme(struct i40e_pf *pf)
		return -ENOMEM;

	pf->irq_pile->num_entries = vectors;
	pf->irq_pile->search_hint = 0;

	/* track first vector for misc interrupts, ignore return */
	(void)i40e_get_lump(pf, pf->irq_pile, 1, I40E_PILE_VALID_BIT - 1);

@@ -11442,7 +11441,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
		goto sw_init_done;
	}
	pf->qp_pile->num_entries = pf->hw.func_caps.num_tx_qp;
	pf->qp_pile->search_hint = 0;

	pf->tx_timeout_recovery_level = 1;
@@ -2338,6 +2338,59 @@ error_param:
				      aq_ret);
}

/**
 * i40e_check_enough_queue - find big enough queue number
 * @vf: pointer to the VF info
 * @needed: the number of items needed
 *
 * Returns the base item index of the queue, or negative for error
 **/
static int i40e_check_enough_queue(struct i40e_vf *vf, u16 needed)
{
	unsigned int i, cur_queues, more, pool_size;
	struct i40e_lump_tracking *pile;
	struct i40e_pf *pf = vf->pf;
	struct i40e_vsi *vsi;

	vsi = pf->vsi[vf->lan_vsi_idx];
	cur_queues = vsi->alloc_queue_pairs;

	/* if current allocated queues are enough for need */
	if (cur_queues >= needed)
		return vsi->base_queue;

	pile = pf->qp_pile;
	if (cur_queues > 0) {
		/* if the allocated queues are not zero
		 * just check if there are enough queues for more
		 * behind the allocated queues.
		 */
		more = needed - cur_queues;
		for (i = vsi->base_queue + cur_queues;
		     i < pile->num_entries; i++) {
			if (pile->list[i] & I40E_PILE_VALID_BIT)
				break;

			if (more-- == 1)
				/* there is enough */
				return vsi->base_queue;
		}
	}

	pool_size = 0;
	for (i = 0; i < pile->num_entries; i++) {
		if (pile->list[i] & I40E_PILE_VALID_BIT) {
			pool_size = 0;
			continue;
		}
		if (needed <= ++pool_size)
			/* there is enough */
			return i;
	}

	return -ENOMEM;
}

/**
 * i40e_vc_request_queues_msg
 * @vf: pointer to the VF info

@@ -2377,6 +2430,12 @@ static int i40e_vc_request_queues_msg(struct i40e_vf *vf, u8 *msg, int msglen)
			 req_pairs - cur_pairs,
			 pf->queues_left);
		vfres->num_queue_pairs = pf->queues_left + cur_pairs;
	} else if (i40e_check_enough_queue(vf, req_pairs) < 0) {
		dev_warn(&pf->pdev->dev,
			 "VF %d requested %d more queues, but there is not enough for it.\n",
			 vf->vf_id,
			 req_pairs - cur_pairs);
		vfres->num_queue_pairs = cur_pairs;
	} else {
		/* successful request */
		vf->num_req_queues = req_pairs;
@@ -1964,14 +1964,15 @@ static void ixgbevf_set_rx_buffer_len(struct ixgbevf_adapter *adapter,
	if (adapter->flags & IXGBEVF_FLAGS_LEGACY_RX)
		return;

	if (PAGE_SIZE < 8192)
		if (max_frame > IXGBEVF_MAX_FRAME_BUILD_SKB)
			set_ring_uses_large_buffer(rx_ring);

	/* 82599 can't rely on RXDCTL.RLPML to restrict the size of the frame */
	if (adapter->hw.mac.type == ixgbe_mac_82599_vf && !ring_uses_large_buffer(rx_ring))
		return;

	set_ring_build_skb_enabled(rx_ring);

	if (PAGE_SIZE < 8192) {
		if (max_frame <= IXGBEVF_MAX_FRAME_BUILD_SKB)
			return;

		set_ring_uses_large_buffer(rx_ring);
	}
}

/**
@@ -1357,7 +1357,7 @@ static int mlx5e_get_module_eeprom(struct net_device *netdev,
		if (size_read < 0) {
			netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
				   __func__, size_read);
			return 0;
			return size_read;
		}

		i += size_read;
@@ -712,7 +712,7 @@ static int sun8i_dwmac_reset(struct stmmac_priv *priv)

	if (err) {
		dev_err(priv->device, "EMAC reset timeout\n");
		return -EFAULT;
		return err;
	}
	return 0;
}
@@ -159,15 +159,20 @@ static int adjust_systime(void __iomem *ioaddr, u32 sec, u32 nsec,

static void get_systime(void __iomem *ioaddr, u64 *systime)
{
	u64 ns;
	u64 ns, sec0, sec1;

	/* Get the TSSS value */
	ns = readl(ioaddr + PTP_STNSR);
	/* Get the TSS and convert sec time value to nanosecond */
	ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;
	/* Get the TSS value */
	sec1 = readl_relaxed(ioaddr + PTP_STSR);
	do {
		sec0 = sec1;
		/* Get the TSSS value */
		ns = readl_relaxed(ioaddr + PTP_STNSR);
		/* Get the TSS value */
		sec1 = readl_relaxed(ioaddr + PTP_STSR);
	} while (sec0 != sec1);

	if (systime)
		*systime = ns;
		*systime = ns + (sec1 * 1000000000ULL);
}

const struct stmmac_hwtimestamp stmmac_ptp = {
@@ -966,9 +966,7 @@ static int yam_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
			       sizeof(struct yamdrv_ioctl_mcs));
		if (IS_ERR(ym))
			return PTR_ERR(ym);
		if (ym->cmd != SIOCYAMSMCS)
			return -EINVAL;
		if (ym->bitrate > YAM_MAXBITRATE) {
		if (ym->cmd != SIOCYAMSMCS || ym->bitrate > YAM_MAXBITRATE) {
			kfree(ym);
			return -EINVAL;
		}
@@ -108,6 +108,7 @@ struct at86rf230_local {
	unsigned long cal_timeout;
	bool is_tx;
	bool is_tx_from_off;
	bool was_tx;
	u8 tx_retry;
	struct sk_buff *tx_skb;
	struct at86rf230_state_change tx;

@@ -351,7 +352,11 @@ at86rf230_async_error_recover_complete(void *context)
	if (ctx->free)
		kfree(ctx);

	ieee802154_wake_queue(lp->hw);
	if (lp->was_tx) {
		lp->was_tx = 0;
		dev_kfree_skb_any(lp->tx_skb);
		ieee802154_wake_queue(lp->hw);
	}
}

static void

@@ -360,7 +365,11 @@ at86rf230_async_error_recover(void *context)
	struct at86rf230_state_change *ctx = context;
	struct at86rf230_local *lp = ctx->lp;

	lp->is_tx = 0;
	if (lp->is_tx) {
		lp->was_tx = 1;
		lp->is_tx = 0;
	}

	at86rf230_async_state_change(lp, ctx, STATE_RX_AACK_ON,
				     at86rf230_async_error_recover_complete);
}
@@ -1769,6 +1769,7 @@ static int ca8210_async_xmit_complete(
			status
		);
		if (status != MAC_TRANSACTION_OVERFLOW) {
			dev_kfree_skb_any(priv->tx_skb);
			ieee802154_wake_queue(priv->hw);
			return 0;
		}

@@ -2974,8 +2975,8 @@ static void ca8210_hw_setup(struct ieee802154_hw *ca8210_hw)
	ca8210_hw->phy->cca.opt = NL802154_CCA_OPT_ENERGY_CARRIER_AND;
	ca8210_hw->phy->cca_ed_level = -9800;
	ca8210_hw->phy->symbol_duration = 16;
	ca8210_hw->phy->lifs_period = 40;
	ca8210_hw->phy->sifs_period = 12;
	ca8210_hw->phy->lifs_period = 40 * ca8210_hw->phy->symbol_duration;
	ca8210_hw->phy->sifs_period = 12 * ca8210_hw->phy->symbol_duration;
	ca8210_hw->flags =
		IEEE802154_HW_AFILT |
		IEEE802154_HW_OMIT_CKSUM |
@@ -805,6 +805,7 @@ static int hwsim_add_one(struct genl_info *info, struct device *dev,
		goto err_pib;
	}

	pib->channel = 13;
	rcu_assign_pointer(phy->pib, pib);
	phy->idx = idx;
	INIT_LIST_HEAD(&phy->edges);
@@ -1005,8 +1005,8 @@ static void mcr20a_hw_setup(struct mcr20a_local *lp)
	dev_dbg(printdev(lp), "%s\n", __func__);

	phy->symbol_duration = 16;
	phy->lifs_period = 40;
	phy->sifs_period = 12;
	phy->lifs_period = 40 * phy->symbol_duration;
	phy->sifs_period = 12 * phy->symbol_duration;

	hw->flags = IEEE802154_HW_TX_OMIT_CKSUM |
		    IEEE802154_HW_AFILT |
@@ -3259,6 +3259,15 @@ static int macsec_newlink(struct net *net, struct net_device *dev,

	macsec->real_dev = real_dev;

	/* send_sci must be set to true when transmit sci explicitly is set */
	if ((data && data[IFLA_MACSEC_SCI]) &&
	    (data && data[IFLA_MACSEC_INC_SCI])) {
		u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]);

		if (!send_sci)
			return -EINVAL;
	}

	if (data && data[IFLA_MACSEC_ICV_LEN])
		icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
	mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
@@ -899,16 +899,15 @@ static int m88e1118_config_aneg(struct phy_device *phydev)
{
	int err;

	err = genphy_soft_reset(phydev);
	if (err < 0)
		return err;

	err = marvell_set_polarity(phydev, phydev->mdix_ctrl);
	if (err < 0)
		return err;

	err = genphy_config_aneg(phydev);
	return 0;
	if (err < 0)
		return err;

	return genphy_soft_reset(phydev);
}

static int m88e1118_config_init(struct phy_device *phydev)
@@ -1166,6 +1166,9 @@ void phy_detach(struct phy_device *phydev)
	    phydev->mdio.dev.driver == &genphy_driver.mdiodrv.driver)
		device_release_driver(&phydev->mdio.dev);

	/* Assert the reset signal */
	phy_device_reset(phydev, 1);

	/*
	 * The phydev might go away on the put_device() below, so avoid
	 * a use-after-free bug by reading the underlying bus first.

@@ -1175,9 +1178,6 @@ void phy_detach(struct phy_device *phydev)
	put_device(&phydev->mdio.dev);
	if (ndev_owner != bus->owner)
		module_put(bus->owner);

	/* Assert the reset signal */
	phy_device_reset(phydev, 1);
}
EXPORT_SYMBOL(phy_detach);
@@ -554,6 +554,11 @@ static int phylink_register_sfp(struct phylink *pl,
		return ret;
	}

	if (!fwnode_device_is_available(ref.fwnode)) {
		fwnode_handle_put(ref.fwnode);
		return 0;
	}

	pl->sfp_bus = sfp_register_upstream(ref.fwnode, pl->netdev, pl,
					    &sfp_phylink_ops);
	if (!pl->sfp_bus)
@@ -1373,59 +1373,69 @@ static int ax88179_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
	u16 hdr_off;
	u32 *pkt_hdr;

	/* This check is no longer done by usbnet */
	if (skb->len < dev->net->hard_header_len)
	/* At the end of the SKB, there's a header telling us how many packets
	 * are bundled into this buffer and where we can find an array of
	 * per-packet metadata (which contains elements encoded into u16).
	 */
	if (skb->len < 4)
		return 0;

	skb_trim(skb, skb->len - 4);
	memcpy(&rx_hdr, skb_tail_pointer(skb), 4);
	le32_to_cpus(&rx_hdr);

	pkt_cnt = (u16)rx_hdr;
	hdr_off = (u16)(rx_hdr >> 16);

	if (pkt_cnt == 0)
		return 0;

	/* Make sure that the bounds of the metadata array are inside the SKB
	 * (and in front of the counter at the end).
	 */
	if (pkt_cnt * 2 + hdr_off > skb->len)
		return 0;
	pkt_hdr = (u32 *)(skb->data + hdr_off);

	while (pkt_cnt--) {
	/* Packets must not overlap the metadata array */
	skb_trim(skb, hdr_off);

	for (; ; pkt_cnt--, pkt_hdr++) {
		u16 pkt_len;

		le32_to_cpus(pkt_hdr);
		pkt_len = (*pkt_hdr >> 16) & 0x1fff;

		if (pkt_len > skb->len)
			return 0;

		/* Check CRC or runt packet */
		if ((*pkt_hdr & AX_RXHDR_CRC_ERR) ||
		    (*pkt_hdr & AX_RXHDR_DROP_ERR)) {
			skb_pull(skb, (pkt_len + 7) & 0xFFF8);
			pkt_hdr++;
			continue;
		}
		if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) &&
		    pkt_len >= 2 + ETH_HLEN) {
			bool last = (pkt_cnt == 0);

		if (pkt_cnt == 0) {
			skb->len = pkt_len;
			/* Skip IP alignment pseudo header */
			skb_pull(skb, 2);
			skb_set_tail_pointer(skb, skb->len);
			skb->truesize = pkt_len + sizeof(struct sk_buff);
			ax88179_rx_checksum(skb, pkt_hdr);
			return 1;
		}

		ax_skb = skb_clone(skb, GFP_ATOMIC);
		if (ax_skb) {
			if (last) {
				ax_skb = skb;
			} else {
				ax_skb = skb_clone(skb, GFP_ATOMIC);
				if (!ax_skb)
					return 0;
			}
			ax_skb->len = pkt_len;
			/* Skip IP alignment pseudo header */
			skb_pull(ax_skb, 2);
			skb_set_tail_pointer(ax_skb, ax_skb->len);
			ax_skb->truesize = pkt_len + sizeof(struct sk_buff);
			ax88179_rx_checksum(ax_skb, pkt_hdr);

			if (last)
				return 1;

			usbnet_skb_return(dev, ax_skb);
		} else {
			return 0;
		}

		skb_pull(skb, (pkt_len + 7) & 0xFFF8);
		pkt_hdr++;
		/* Trim this packet away from the SKB */
		if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8))
			return 0;
	}
	return 1;
}

static struct sk_buff *
@@ -584,6 +584,11 @@ static const struct usb_device_id products[] = {
	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
	.bInterfaceProtocol	= USB_CDC_PROTO_NONE

#define ZAURUS_FAKE_INTERFACE \
	.bInterfaceClass	= USB_CLASS_COMM, \
	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
	.bInterfaceProtocol	= USB_CDC_PROTO_NONE

/* SA-1100 based Sharp Zaurus ("collie"), or compatible;
 * wire-incompatible with true CDC Ethernet implementations.
 * (And, it seems, needlessly so...)

@@ -637,6 +642,13 @@ static const struct usb_device_id products[] = {
	.idProduct		= 0x9032,	/* SL-6000 */
	ZAURUS_MASTER_INTERFACE,
	.driver_info		= 0,
}, {
	.match_flags	= USB_DEVICE_ID_MATCH_INT_INFO
			  | USB_DEVICE_ID_MATCH_DEVICE,
	.idVendor		= 0x04DD,
	.idProduct		= 0x9032,	/* SL-6000 */
	ZAURUS_FAKE_INTERFACE,
	.driver_info		= 0,
}, {
	.match_flags	= USB_DEVICE_ID_MATCH_INT_INFO
			  | USB_DEVICE_ID_MATCH_DEVICE,
@@ -173,7 +173,7 @@ static int ipheth_alloc_urbs(struct ipheth_device *iphone)
	if (tx_buf == NULL)
		goto free_rx_urb;

	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE,
	rx_buf = usb_alloc_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
				    GFP_KERNEL, &rx_urb->transfer_dma);
	if (rx_buf == NULL)
		goto free_tx_buf;

@@ -198,7 +198,7 @@ error_nomem:

static void ipheth_free_urbs(struct ipheth_device *iphone)
{
	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->rx_buf,
	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN, iphone->rx_buf,
			  iphone->rx_urb->transfer_dma);
	usb_free_coherent(iphone->udev, IPHETH_BUF_SIZE, iphone->tx_buf,
			  iphone->tx_urb->transfer_dma);

@@ -371,7 +371,7 @@ static int ipheth_rx_submit(struct ipheth_device *dev, gfp_t mem_flags)

	usb_fill_bulk_urb(dev->rx_urb, udev,
			  usb_rcvbulkpipe(udev, dev->bulk_in),
			  dev->rx_buf, IPHETH_BUF_SIZE,
			  dev->rx_buf, IPHETH_BUF_SIZE + IPHETH_IP_ALIGN,
			  ipheth_rcvbulk_callback,
			  dev);
	dev->rx_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
@@ -1358,6 +1358,8 @@ static const struct usb_device_id products[] = {
	{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
	{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
	{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
	{QMI_FIXED_INTF(0x413c, 0x81e4, 0)},	/* Dell Wireless 5829e with eSIM support*/
	{QMI_FIXED_INTF(0x413c, 0x81e6, 0)},	/* Dell Wireless 5829e */
	{QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)},	/* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
	{QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)},	/* HP lt4120 Snapdragon X5 LTE */
	{QMI_FIXED_INTF(0x22de, 0x9061, 3)},	/* WeTelecom WPD-600N */
@@ -410,7 +410,7 @@ static int sr9700_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
		/* ignore the CRC length */
		len = (skb->data[1] | (skb->data[2] << 8)) - 4;

		if (len > ETH_FRAME_LEN)
		if (len > ETH_FRAME_LEN || len > skb->len)
			return 0;

		/* the last packet of current skb */
@@ -268,6 +268,11 @@ static const struct usb_device_id products [] = {
	.bInterfaceSubClass	= USB_CDC_SUBCLASS_ETHERNET, \
	.bInterfaceProtocol	= USB_CDC_PROTO_NONE

#define ZAURUS_FAKE_INTERFACE \
	.bInterfaceClass	= USB_CLASS_COMM, \
	.bInterfaceSubClass	= USB_CDC_SUBCLASS_MDLM, \
	.bInterfaceProtocol	= USB_CDC_PROTO_NONE

/* SA-1100 based Sharp Zaurus ("collie"), or compatible. */
{
	.match_flags	= USB_DEVICE_ID_MATCH_INT_INFO

@@ -325,6 +330,13 @@ static const struct usb_device_id products [] = {
	.idProduct	= 0x9032,	/* SL-6000 */
	ZAURUS_MASTER_INTERFACE,
	.driver_info	= ZAURUS_PXA_INFO,
}, {
	.match_flags	= USB_DEVICE_ID_MATCH_INT_INFO
			  | USB_DEVICE_ID_MATCH_DEVICE,
	.idVendor	= 0x04DD,
	.idProduct	= 0x9032,	/* SL-6000 */
	ZAURUS_FAKE_INTERFACE,
	.driver_info	= (unsigned long)&bogus_mdlm_info,
}, {
	.match_flags	= USB_DEVICE_ID_MATCH_INT_INFO
			  | USB_DEVICE_ID_MATCH_DEVICE,
@@ -152,9 +152,10 @@ static void __veth_xdp_flush(struct veth_rq *rq)
{
	/* Write ptr_ring before reading rx_notify_masked */
	smp_mb();
	if (!rq->rx_notify_masked) {
		rq->rx_notify_masked = true;
		napi_schedule(&rq->xdp_napi);
	if (!READ_ONCE(rq->rx_notify_masked) &&
	    napi_schedule_prep(&rq->xdp_napi)) {
		WRITE_ONCE(rq->rx_notify_masked, true);
		__napi_schedule(&rq->xdp_napi);
	}
}

@@ -623,8 +624,10 @@ static int veth_poll(struct napi_struct *napi, int budget)
		/* Write rx_notify_masked before reading ptr_ring */
		smp_store_mb(rq->rx_notify_masked, false);
		if (unlikely(!__ptr_ring_empty(&rq->xdp_ring))) {
			rq->rx_notify_masked = true;
			napi_schedule(&rq->xdp_napi);
			if (napi_schedule_prep(&rq->xdp_napi)) {
				WRITE_ONCE(rq->rx_notify_masked, true);
				__napi_schedule(&rq->xdp_napi);
			}
		}
	}
@@ -1549,6 +1549,8 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
 out_unbind:
	complete(&drv->request_firmware_complete);
	device_release_driver(drv->trans->dev);
	/* drv has just been freed by the release */
	failure = false;
 free:
	if (failure)
		iwl_dealloc_ucode(drv);
@@ -310,8 +310,7 @@ int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
	/* This may fail if AMT took ownership of the device */
	if (iwl_pcie_prepare_card_hw(trans)) {
		IWL_WARN(trans, "Exit HW not ready\n");
		ret = -EIO;
		goto out;
		return -EIO;
	}

	iwl_enable_rfkill_int(trans);
@@ -1363,8 +1363,7 @@ static int iwl_trans_pcie_start_fw(struct iwl_trans *trans,
	/* This may fail if AMT took ownership of the device */
	if (iwl_pcie_prepare_card_hw(trans)) {
		IWL_WARN(trans, "Exit HW not ready\n");
		ret = -EIO;
		goto out;
		return -EIO;
	}

	iwl_enable_rfkill_int(trans);
@@ -3466,7 +3466,14 @@ static void nvme_async_event_work(struct work_struct *work)
		container_of(work, struct nvme_ctrl, async_event_work);

	nvme_aen_uevent(ctrl);
	ctrl->ops->submit_async_event(ctrl);

	/*
	 * The transport drivers must guarantee AER submission here is safe by
	 * flushing ctrl async_event_work after changing the controller state
	 * from LIVE and before freeing the admin queue.
	 */
	if (ctrl->state == NVME_CTRL_LIVE)
		ctrl->ops->submit_async_event(ctrl);
}

static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
@@ -1050,6 +1050,7 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
 			struct nvme_rdma_ctrl, err_work);

 	nvme_stop_keep_alive(&ctrl->ctrl);
+	flush_work(&ctrl->ctrl.async_event_work);
 	nvme_rdma_teardown_io_queues(ctrl, false);
 	nvme_start_queues(&ctrl->ctrl);
 	nvme_rdma_teardown_admin_queue(ctrl, false);

@@ -1010,7 +1010,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	ioc->usg_calls++;
 #endif

-	while(sg_dma_len(sglist) && nents--) {
+	while (nents && sg_dma_len(sglist)) {

 #ifdef CCIO_COLLECT_STATS
 		ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
@@ -1018,6 +1018,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 		ccio_unmap_page(dev, sg_dma_address(sglist),
 				sg_dma_len(sglist), direction, 0);
 		++sglist;
+		nents--;
 	}

 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents);

@@ -1063,7 +1063,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	spin_unlock_irqrestore(&ioc->res_lock, flags);
 #endif

-	while (sg_dma_len(sglist) && nents--) {
+	while (nents && sg_dma_len(sglist)) {

 		sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
 				direction, 0);
@@ -1072,6 +1072,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 		ioc->usingle_calls--;	/* kluge since call is unmap_sg() */
 #endif
 		++sglist;
+		nents--;
 	}

 	DBG_RUN_SG("%s() DONE (nents %d)\n", __func__, nents);

@@ -92,7 +92,7 @@ static int rpmsg_eptdev_destroy(struct device *dev, void *data)
 	/* wake up any blocked readers */
 	wake_up_interruptible(&eptdev->readq);

-	device_del(&eptdev->dev);
+	cdev_device_del(&eptdev->cdev, &eptdev->dev);
 	put_device(&eptdev->dev);

 	return 0;
@@ -329,7 +329,6 @@ static void rpmsg_eptdev_release_device(struct device *dev)

 	ida_simple_remove(&rpmsg_ept_ida, dev->id);
 	ida_simple_remove(&rpmsg_minor_ida, MINOR(eptdev->dev.devt));
-	cdev_del(&eptdev->cdev);
 	kfree(eptdev);
 }

@@ -374,19 +373,13 @@ static int rpmsg_eptdev_create(struct rpmsg_ctrldev *ctrldev,
 	dev->id = ret;
 	dev_set_name(dev, "rpmsg%d", ret);

-	ret = cdev_add(&eptdev->cdev, dev->devt, 1);
+	ret = cdev_device_add(&eptdev->cdev, &eptdev->dev);
 	if (ret)
 		goto free_ept_ida;

 	/* We can now rely on the release function for cleanup */
 	dev->release = rpmsg_eptdev_release_device;

-	ret = device_add(dev);
-	if (ret) {
-		dev_err(dev, "device_add failed: %d\n", ret);
-		put_device(dev);
-	}
-
 	return ret;

 free_ept_ida:
@@ -455,7 +448,6 @@ static void rpmsg_ctrldev_release_device(struct device *dev)

 	ida_simple_remove(&rpmsg_ctrl_ida, dev->id);
 	ida_simple_remove(&rpmsg_minor_ida, MINOR(dev->devt));
-	cdev_del(&ctrldev->cdev);
 	kfree(ctrldev);
 }

@@ -490,19 +482,13 @@ static int rpmsg_chrdev_probe(struct rpmsg_device *rpdev)
 	dev->id = ret;
 	dev_set_name(&ctrldev->dev, "rpmsg_ctrl%d", ret);

-	ret = cdev_add(&ctrldev->cdev, dev->devt, 1);
+	ret = cdev_device_add(&ctrldev->cdev, &ctrldev->dev);
 	if (ret)
 		goto free_ctrl_ida;

 	/* We can now rely on the release function for cleanup */
 	dev->release = rpmsg_ctrldev_release_device;

-	ret = device_add(dev);
-	if (ret) {
-		dev_err(&rpdev->dev, "device_add failed: %d\n", ret);
-		put_device(dev);
-	}
-
 	dev_set_drvdata(&rpdev->dev, ctrldev);

 	return ret;
@@ -528,7 +514,7 @@ static void rpmsg_chrdev_remove(struct rpmsg_device *rpdev)
 	if (ret)
 		dev_warn(&rpdev->dev, "failed to nuke endpoints: %d\n", ret);

-	device_del(&ctrldev->dev);
+	cdev_device_del(&ctrldev->cdev, &ctrldev->dev);
 	put_device(&ctrldev->dev);
 }

@@ -82,7 +82,7 @@ unsigned int mc146818_get_time(struct rtc_time *time)
 		time->tm_year += real_year - 72;
 #endif

-	if (century > 20)
+	if (century > 19)
 		time->tm_year += (century - 19) * 100;

 	/*

@@ -521,6 +521,8 @@ static void zfcp_fc_adisc_handler(void *data)
 		goto out;
 	}

+	/* re-init to undo drop from zfcp_fc_adisc() */
+	port->d_id = ntoh24(adisc_resp->adisc_port_id);
 	/* port is good, unblock rport without going through erp */
 	zfcp_scsi_schedule_rport_register(port);
 out:
@@ -534,6 +536,7 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
 	struct zfcp_fc_req *fc_req;
 	struct zfcp_adapter *adapter = port->adapter;
 	struct Scsi_Host *shost = adapter->scsi_host;
+	u32 d_id;
 	int ret;

 	fc_req = kmem_cache_zalloc(zfcp_fc_req_cache, GFP_ATOMIC);
@@ -558,7 +561,15 @@ static int zfcp_fc_adisc(struct zfcp_port *port)
 	fc_req->u.adisc.req.adisc_cmd = ELS_ADISC;
 	hton24(fc_req->u.adisc.req.adisc_port_id, fc_host_port_id(shost));

-	ret = zfcp_fsf_send_els(adapter, port->d_id, &fc_req->ct_els,
+	d_id = port->d_id; /* remember as destination for send els below */
+	/*
+	 * Force fresh GID_PN lookup on next port recovery.
+	 * Must happen after request setup and before sending request,
+	 * to prevent race with port->d_id re-init in zfcp_fc_adisc_handler().
+	 */
+	port->d_id = 0;
+
+	ret = zfcp_fsf_send_els(adapter, d_id, &fc_req->ct_els,
 				ZFCP_FC_CTELS_TMO);
 	if (ret)
 		kmem_cache_free(zfcp_fc_req_cache, fc_req);

@@ -80,7 +80,7 @@ static int bnx2fc_bind_pcidev(struct bnx2fc_hba *hba);
 static void bnx2fc_unbind_pcidev(struct bnx2fc_hba *hba);
 static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 				  struct device *parent, int npiv);
-static void bnx2fc_destroy_work(struct work_struct *work);
+static void bnx2fc_port_destroy(struct fcoe_port *port);

 static struct bnx2fc_hba *bnx2fc_hba_lookup(struct net_device *phys_dev);
 static struct bnx2fc_interface *bnx2fc_interface_lookup(struct net_device
@@ -515,7 +515,8 @@ static int bnx2fc_l2_rcv_thread(void *arg)

 static void bnx2fc_recv_frame(struct sk_buff *skb)
 {
-	u32 fr_len;
+	u64 crc_err;
+	u32 fr_len, fr_crc;
 	struct fc_lport *lport;
 	struct fcoe_rcv_info *fr;
 	struct fc_stats *stats;
@@ -549,6 +550,11 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
 	skb_pull(skb, sizeof(struct fcoe_hdr));
 	fr_len = skb->len - sizeof(struct fcoe_crc_eof);

+	stats = per_cpu_ptr(lport->stats, get_cpu());
+	stats->RxFrames++;
+	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
+	put_cpu();
+
 	fp = (struct fc_frame *)skb;
 	fc_frame_init(fp);
 	fr_dev(fp) = lport;
@@ -631,16 +637,15 @@ static void bnx2fc_recv_frame(struct sk_buff *skb)
 		return;
 	}

-	stats = per_cpu_ptr(lport->stats, smp_processor_id());
-	stats->RxFrames++;
-	stats->RxWords += fr_len / FCOE_WORD_TO_BYTE;
+	fr_crc = le32_to_cpu(fr_crc(fp));

-	if (le32_to_cpu(fr_crc(fp)) !=
-	    ~crc32(~0, skb->data, fr_len)) {
-		if (stats->InvalidCRCCount < 5)
+	if (unlikely(fr_crc != ~crc32(~0, skb->data, fr_len))) {
+		stats = per_cpu_ptr(lport->stats, get_cpu());
+		crc_err = (stats->InvalidCRCCount++);
+		put_cpu();
+		if (crc_err < 5)
 			printk(KERN_WARNING PFX "dropping frame with "
 			       "CRC error\n");
-		stats->InvalidCRCCount++;
 		kfree_skb(skb);
 		return;
 	}
@@ -911,9 +916,6 @@ static void bnx2fc_indicate_netevent(void *context, unsigned long event,
 			__bnx2fc_destroy(interface);
 		}
 		mutex_unlock(&bnx2fc_dev_lock);
-
-		/* Ensure ALL destroy work has been completed before return */
-		flush_workqueue(bnx2fc_wq);
 		return;

 	default:
@@ -1220,8 +1222,8 @@ static int bnx2fc_vport_destroy(struct fc_vport *vport)
 	mutex_unlock(&n_port->lp_mutex);
 	bnx2fc_free_vport(interface->hba, port->lport);
 	bnx2fc_port_shutdown(port->lport);
+	bnx2fc_port_destroy(port);
 	bnx2fc_interface_put(interface);
-	queue_work(bnx2fc_wq, &port->destroy_work);
 	return 0;
 }

@@ -1530,7 +1532,6 @@ static struct fc_lport *bnx2fc_if_create(struct bnx2fc_interface *interface,
 	port->lport = lport;
 	port->priv = interface;
 	port->get_netdev = bnx2fc_netdev;
-	INIT_WORK(&port->destroy_work, bnx2fc_destroy_work);

 	/* Configure fcoe_port */
 	rc = bnx2fc_lport_config(lport);
@@ -1658,8 +1659,8 @@ static void __bnx2fc_destroy(struct bnx2fc_interface *interface)
 	bnx2fc_interface_cleanup(interface);
 	bnx2fc_stop(interface);
 	list_del(&interface->list);
+	bnx2fc_port_destroy(port);
 	bnx2fc_interface_put(interface);
-	queue_work(bnx2fc_wq, &port->destroy_work);
 }

 /**
@@ -1700,15 +1701,12 @@ netdev_err:
 	return rc;
 }

-static void bnx2fc_destroy_work(struct work_struct *work)
+static void bnx2fc_port_destroy(struct fcoe_port *port)
 {
-	struct fcoe_port *port;
 	struct fc_lport *lport;

-	port = container_of(work, struct fcoe_port, destroy_work);
 	lport = port->lport;

-	BNX2FC_HBA_DBG(lport, "Entered bnx2fc_destroy_work\n");
+	BNX2FC_HBA_DBG(lport, "Entered %s, destroying lport %p\n", __func__, lport);

 	bnx2fc_if_destroy(lport);
 }
@@ -2562,9 +2560,6 @@ static void bnx2fc_ulp_exit(struct cnic_dev *dev)
 		__bnx2fc_destroy(interface);
 	mutex_unlock(&bnx2fc_dev_lock);

-	/* Ensure ALL destroy work has been completed before return */
-	flush_workqueue(bnx2fc_wq);
-
 	bnx2fc_ulp_stop(hba);
 	/* unregister cnic device */
 	if (test_and_clear_bit(BNX2FC_CNIC_REGISTERED, &hba->reg_with_cnic))

@@ -341,17 +341,12 @@ out:
 	return ret;
 }

-static int init_clks(struct platform_device *pdev, struct clk **clk)
+static void init_clks(struct platform_device *pdev, struct clk **clk)
 {
 	int i;

-	for (i = CLK_NONE + 1; i < CLK_MAX; i++) {
+	for (i = CLK_NONE + 1; i < CLK_MAX; i++)
 		clk[i] = devm_clk_get(&pdev->dev, clk_names[i]);
-		if (IS_ERR(clk[i]))
-			return PTR_ERR(clk[i]);
-	}
-
-	return 0;
 }

 static struct scp *init_scp(struct platform_device *pdev,
@@ -361,7 +356,7 @@ static struct scp *init_scp(struct platform_device *pdev,
 {
 	struct genpd_onecell_data *pd_data;
 	struct resource *res;
-	int i, j, ret;
+	int i, j;
 	struct scp *scp;
 	struct clk *clk[CLK_MAX];

@@ -416,9 +411,7 @@ static struct scp *init_scp(struct platform_device *pdev,

 	pd_data->num_domains = num;

-	ret = init_clks(pdev, clk);
-	if (ret)
-		return ERR_PTR(ret);
+	init_clks(pdev, clk);

 	for (i = 0; i < num; i++) {
 		struct scp_domain *scpd = &scp->domains[i];

@@ -520,7 +520,7 @@ static void bcm_qspi_chip_select(struct bcm_qspi *qspi, int cs)
 	u32 rd = 0;
 	u32 wr = 0;

-	if (qspi->base[CHIP_SELECT]) {
+	if (cs >= 0 && qspi->base[CHIP_SELECT]) {
 		rd = bcm_qspi_read(qspi, CHIP_SELECT, 0);
 		wr = (rd & ~0xff) | (1 << cs);
 		if (rd == wr)

@@ -529,6 +529,11 @@ static int meson_spicc_probe(struct platform_device *pdev)
 	writel_relaxed(0, spicc->base + SPICC_INTREG);

 	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		ret = irq;
+		goto out_master;
+	}
+
 	ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq,
 			       0, NULL, spicc);
 	if (ret) {

@@ -498,7 +498,7 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
 	else
 		mdata->state = MTK_SPI_IDLE;

-	if (!master->can_dma(master, master->cur_msg->spi, trans)) {
+	if (!master->can_dma(master, NULL, trans)) {
 		if (trans->rx_buf) {
 			cnt = mdata->xfer_len / 4;
 			ioread32_rep(mdata->base + SPI_RX_DATA_REG,

@@ -332,7 +332,10 @@ static int __init fbtft_driver_module_init(void)           \
 	ret = spi_register_driver(&fbtft_driver_spi_driver);          \
 	if (ret < 0)                                                  \
 		return ret;                                           \
-	return platform_driver_register(&fbtft_driver_platform_driver); \
+	ret = platform_driver_register(&fbtft_driver_platform_driver); \
+	if (ret < 0)                                                  \
+		spi_unregister_driver(&fbtft_driver_spi_driver);      \
+	return ret;                                                   \
 }                                                                     \
									\
 static void __exit fbtft_driver_module_exit(void)                     \

@@ -451,6 +451,9 @@ static bool iscsit_tpg_check_network_portal(
 			break;
 		}
 		spin_unlock(&tpg->tpg_np_lock);
+
+		if (match)
+			break;
 	}
 	spin_unlock(&tiqn->tiqn_tpg_lock);

@@ -313,6 +313,7 @@ static struct tty_driver *gsm_tty_driver;
 #define GSM1_ESCAPE_BITS	0x20
 #define XON			0x11
 #define XOFF			0x13
+#define ISO_IEC_646_MASK	0x7F

 static const struct tty_port_operations gsm_port_ops;

@@ -427,7 +428,7 @@ static u8 gsm_encode_modem(const struct gsm_dlci *dlci)
 		modembits |= MDM_RTR;
 	if (dlci->modem_tx & TIOCM_RI)
 		modembits |= MDM_IC;
-	if (dlci->modem_tx & TIOCM_CD)
+	if (dlci->modem_tx & TIOCM_CD || dlci->gsm->initiator)
 		modembits |= MDM_DV;
 	return modembits;
 }
@@ -531,7 +532,8 @@ static int gsm_stuff_frame(const u8 *input, u8 *output, int len)
 	int olen = 0;
 	while (len--) {
 		if (*input == GSM1_SOF || *input == GSM1_ESCAPE
-		    || *input == XON || *input == XOFF) {
+		    || (*input & ISO_IEC_646_MASK) == XON
+		    || (*input & ISO_IEC_646_MASK) == XOFF) {
 			*output++ = GSM1_ESCAPE;
 			*output++ = *input++ ^ GSM1_ESCAPE_BITS;
 			olen++;
@@ -1488,7 +1490,7 @@ static void gsm_dlci_t1(struct timer_list *t)
 			dlci->mode = DLCI_MODE_ADM;
 			gsm_dlci_open(dlci);
 		} else {
-			gsm_dlci_close(dlci);
+			gsm_dlci_begin_close(dlci); /* prevent half open link */
 		}

 		break;