This is the 4.19.128 stable release

-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl7hNfEACgkQONu9yGCS
 aT5aTxAAtqnRa7AYVX4u/pXURNOTJ4pWjI0kL2P9GBncphzLWVuvJ2GgXg12L5ZO
 gqZww3Qha9XRYtx+6KBOnG6BaH+4xVhAh1SxbVxrpkfjwQTCd/+3kI6YszKxnKV8
 eg0lf8ZMaF9zZkEqlMUINASvQfS0yJu68VsiCKuJOTiakVBdxNbCKAxhzdcVat3w
 JKUYR3M+EF4kEgytL2qr0rhyiEKCOu7zf1KDgQvuqJDiB68gepsyRPDDJzO8ttDd
 NWxyTb0JPu19AVZjlAG+YhnFrQRKrhD+ToQ44t3ze3/TXZ2PqsgYa2MqvZDLBOEe
 JxGV+3k7BINlCoz/1T9wm9JJKTanvWAFd9URmxcqwZRKddcX2gumbzpObEQ/o/Xg
 I/AjXAcNV3jWSCeVXkiXe598RPvJpn65Khg8NBojUSTN0prZCwGw9BrISKtIWhT3
 CBjzaXfbCQjMgetMoChVQzHzVPIZ80VrX0QXFOy4mJ6qixKmLU6UZPE7IeGc2MT8
 okVN+QxCkKE5TKF0YTepT8MNiKdmphEIRG266V+wZn9546NB2iQRq+6TgvvPhtSh
 9q9e0Z37ynTHaYFAUYM9u5VEzaGB1+cuq1mz1RVLwfU6aFhw2pcsSzWeroWODGKO
 Fm8YbEOFXNb6riHlQrdUrOoZ7uHg4/m40lv0GM90l2BsMRqPrhY=
 =w9Sc
 -----END PGP SIGNATURE-----

Merge 4.19.128 into android-4.19-stable

Changes in 4.19.128
	devinet: fix memleak in inetdev_init()
	l2tp: add sk_family checks to l2tp_validate_socket
	l2tp: do not use inet_hash()/inet_unhash()
	net: usb: qmi_wwan: add Telit LE910C1-EUX composition
	NFC: st21nfca: add missed kfree_skb() in an error path
	vsock: fix timeout in vsock_accept()
	net: check untrusted gso_size at kernel entry
	USB: serial: qcserial: add DW5816e QDL support
	USB: serial: usb_wwan: do not resubmit rx urb on fatal errors
	USB: serial: option: add Telit LE910C1-EUX compositions
	iio: vcnl4000: Fix i2c swapped word reading.
	usb: musb: start session in resume for host port
	usb: musb: Fix runtime PM imbalance on error
	vt: keyboard: avoid signed integer overflow in k_ascii
	tty: hvc_console, fix crashes on parallel open/close
	staging: rtl8712: Fix IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK
	CDC-ACM: heed quirk also in error handling
	nvmem: qfprom: remove incorrect write support
	x86/cpu: Add a steppings field to struct x86_cpu_id
	x86/cpu: Add 'table' argument to cpu_matches()
	x86/speculation: Add Special Register Buffer Data Sampling (SRBDS) mitigation
	x86/speculation: Add SRBDS vulnerability and mitigation documentation
	x86/speculation: Add Ivy Bridge to affected list
	uprobes: ensure that uprobe->offset and ->ref_ctr_offset are properly aligned
	Revert "net/mlx5: Annotate mutex destroy for root ns"
	Linux 4.19.128

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: If3a899efc4809d24257107dd0016a97beb3cb6e9
Author: Greg Kroah-Hartman <gregkh@google.com>
Date:   2020-06-11 09:16:29 +02:00
Commit: d8cc60ec42
35 changed files with 500 additions and 98 deletions


@ -478,6 +478,7 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
/sys/devices/system/cpu/vulnerabilities/l1tf
/sys/devices/system/cpu/vulnerabilities/mds
/sys/devices/system/cpu/vulnerabilities/srbds
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
Date: January 2018


@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
mds
tsx_async_abort
multihit.rst
special-register-buffer-data-sampling.rst


@ -0,0 +1,149 @@
.. SPDX-License-Identifier: GPL-2.0
SRBDS - Special Register Buffer Data Sampling
=============================================
SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
infer values returned from special register accesses. Special register
accesses are accesses to off-core registers. According to Intel's evaluation,
the special register reads that have a security expectation of privacy are
RDRAND, RDSEED and SGX EGETKEY.
When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
to the core through the special register mechanism that is susceptible
to MDS attacks.
Affected processors
--------------------
Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
be affected.
A processor is affected by SRBDS if its Family_Model and stepping are
in the following list, with the exception of the listed processors
exporting MDS_NO while Intel TSX is available yet not enabled. The
latter class of processors is only affected when Intel TSX is enabled
by software using TSX_CTRL_MSR; otherwise they are not affected.
============= ============ ========
common name Family_Model Stepping
============= ============ ========
IvyBridge 06_3AH All
Haswell 06_3CH All
Haswell_L 06_45H All
Haswell_G 06_46H All
Broadwell_G 06_47H All
Broadwell 06_3DH All
Skylake_L 06_4EH All
Skylake 06_5EH All
Kabylake_L 06_8EH <= 0xC
Kabylake 06_9EH <= 0xD
============= ============ ========
Related CVEs
------------
The following CVE entry is related to this SRBDS issue:
============== ===== =====================================
CVE-2020-0543 SRBDS Special Register Buffer Data Sampling
============== ===== =====================================
Attack scenarios
----------------
An unprivileged user can extract values returned from RDRAND and RDSEED
executed on another core or sibling thread using MDS techniques.
Mitigation mechanism
--------------------
Intel will release microcode updates that modify the RDRAND, RDSEED, and
EGETKEY instructions to overwrite secret special register data in the shared
staging buffer before the secret data can be accessed by another logical
processor.
During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
accesses from other logical processors will be delayed until the special
register read is complete and the secret data in the shared staging buffer is
overwritten.
This has three effects on performance:
#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
#. Executing RDRAND at the same time on multiple logical processors will be
serialized, resulting in an overall reduction in the maximum RDRAND
bandwidth.
#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
logical processors that miss their core caches, with an impact similar to
legacy locked cache-line-split accesses.
The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
the mitigation for RDRAND and RDSEED instructions executed outside of Intel
Software Guard Extensions (Intel SGX) enclaves. On logical processors that
disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
take longer to execute and do not impact performance of sibling logical
processors' memory accesses. The opt-out mechanism does not affect Intel SGX
enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
as EGETKEY execution).
IA32_MCU_OPT_CTRL MSR Definition
--------------------------------
Along with the mitigation for this issue, Intel added a new thread-scope
IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
9]==1. This MSR is introduced through the microcode update.
Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
enclave on that logical processor. Opting out of the mitigation for a
particular logical processor does not affect the RDRAND and RDSEED mitigations
for other logical processors.
Note that inside of an Intel SGX enclave, the mitigation is applied regardless
of the value of RNGDS_MITG_DIS.
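For illustration only (not part of this patch), a minimal sketch of how kernel
code drives this bit, mirroring the update_srbds_msr() helper added further
down in this commit; the function name here is hypothetical::

   /* Sketch: opt in to or out of the SRBDS mitigation on the current CPU.
    * Uses MSR_IA32_MCU_OPT_CTRL and RNGDS_MITG_DIS as defined by this
    * series; only meaningful once the SRBDS microcode update is loaded.
    */
   #include <linux/types.h>
   #include <asm/msr.h>

   static void srbds_toggle_mitigation(bool enable)
   {
           u64 mcu_ctrl;

           rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
           if (enable)
                   mcu_ctrl &= ~RNGDS_MITG_DIS;    /* mitigation active */
           else
                   mcu_ctrl |= RNGDS_MITG_DIS;     /* opted out, fast RDRAND/RDSEED */
           wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
   }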
Mitigation control on the kernel command line
---------------------------------------------
The kernel command line allows control over the SRBDS mitigation at boot time
with the option "srbds=". The option for this is:
============= =============================================================
off This option disables SRBDS mitigation for RDRAND and RDSEED on
affected platforms.
============= =============================================================
SRBDS System Information
------------------------
The Linux kernel provides vulnerability status information through sysfs. For
SRBDS this can be accessed by the following sysfs file:
/sys/devices/system/cpu/vulnerabilities/srbds
The possible values contained in this file are:
============================== =============================================
Not affected Processor not vulnerable
Vulnerable Processor vulnerable and mitigation disabled
Vulnerable: No microcode Processor vulnerable and microcode is missing
mitigation
Mitigation: Microcode Processor is vulnerable and mitigation is in
effect.
Mitigation: TSX disabled Processor is only vulnerable when TSX is
enabled while this system was booted with TSX
disabled.
Unknown: Dependent on
hypervisor status Running on virtual guest processor that is
affected but with no way to know if host
processor is mitigated or vulnerable.
============================== =============================================
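As a usage note (not part of this document), reading the attribute from
userspace is an ordinary file read; a minimal C sketch, assuming sysfs is
mounted at /sys::

   /* Sketch: print the SRBDS status string exposed by the kernel. */
   #include <stdio.h>

   int main(void)
   {
           char line[128];
           FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/srbds", "r");

           if (!f) {
                   perror("srbds"); /* attribute absent on kernels without this patch */
                   return 1;
           }
           if (fgets(line, sizeof(line), f))
                   fputs(line, stdout); /* e.g. "Mitigation: Microcode" */
           fclose(f);
           return 0;
   }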
SRBDS Default mitigation
------------------------
The updated microcode serializes processor accesses during execution of RDRAND
or RDSEED, and ensures that the shared buffer is overwritten before it is
released for reuse. Use the "srbds=off" kernel command line option to disable
the mitigation for RDRAND and RDSEED.


@ -4436,6 +4436,26 @@
spia_pedr=
spia_peddr=
srbds= [X86,INTEL]
Control the Special Register Buffer Data Sampling
(SRBDS) mitigation.
Certain CPUs are vulnerable to an MDS-like
exploit which can leak bits from the random
number generator.
By default, this issue is mitigated by
microcode. However, the microcode fix can cause
the RDRAND and RDSEED instructions to become
much slower. Among other effects, this will
result in reduced throughput from /dev/urandom.
The microcode mitigation can be disabled with
the following option:
off: Disable mitigation and remove
performance impact to RDRAND and RDSEED
srcutree.counter_wrap_check [KNL]
Specifies how frequently to check for
grace-period sequence counter wrap for the


@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 4
PATCHLEVEL = 19
SUBLEVEL = 127
SUBLEVEL = 128
EXTRAVERSION =
NAME = "People's Front"


@ -9,6 +9,33 @@
#include <linux/mod_devicetable.h>
#define X86_STEPPINGS(mins, maxs) GENMASK(maxs, mins)
/**
* X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
* @_vendor: The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
* The name is expanded to X86_VENDOR_@_vendor
* @_family: The family number or X86_FAMILY_ANY
* @_model: The model number, model constant or X86_MODEL_ANY
* @_steppings: Bitmask for steppings, stepping constant or X86_STEPPING_ANY
* @_feature: A X86_FEATURE bit or X86_FEATURE_ANY
* @_data: Driver specific data or NULL. The internal storage
* format is unsigned long. The supplied value, pointer
* etc. is cast to unsigned long internally.
*
* Backport version to keep the SRBDS pile consistent. No shorter variants
* required for this.
*/
#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
_steppings, _feature, _data) { \
.vendor = X86_VENDOR_##_vendor, \
.family = _family, \
.model = _model, \
.steppings = _steppings, \
.feature = _feature, \
.driver_data = (unsigned long) _data \
}
extern const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match);
#endif
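For illustration only (not part of this patch), a sketch of how a consumer uses
the backported macro; the table name is hypothetical, and the model/stepping
values are the ones the SRBDS blacklist in common.c (further down in this
commit) uses:

/* Sketch: match Kaby Lake mobile parts up to stepping 0xC, any feature. */
#include <asm/cpu_device_id.h>  /* the macro defined above */
#include <asm/intel-family.h>   /* INTEL_FAM6_KABYLAKE_MOBILE */

static const struct x86_cpu_id example_ids[] = {
	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,
		INTEL_FAM6_KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0xC),
		X86_FEATURE_ANY, NULL),
	{}
};

/* x86_match_cpu(example_ids) then returns the matching entry (or NULL) for
 * the boot CPU, which is exactly how cpu_matches() consumes such tables.
 */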


@ -347,6 +347,7 @@
/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
#define X86_FEATURE_SRBDS_CTRL (18*32+ 9) /* "" SRBDS mitigation MSR available */
#define X86_FEATURE_TSX_FORCE_ABORT (18*32+13) /* "" TSX_FORCE_ABORT */
#define X86_FEATURE_MD_CLEAR (18*32+10) /* VERW clears CPU buffers */
#define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
@ -391,5 +392,6 @@
#define X86_BUG_SWAPGS X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
#define X86_BUG_TAA X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
#define X86_BUG_ITLB_MULTIHIT X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
#define X86_BUG_SRBDS X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
#endif /* _ASM_X86_CPUFEATURES_H */


@ -110,6 +110,10 @@
#define TSX_CTRL_RTM_DISABLE BIT(0) /* Disable RTM feature */
#define TSX_CTRL_CPUID_CLEAR BIT(1) /* Disable TSX enumeration */
/* SRBDS support */
#define MSR_IA32_MCU_OPT_CTRL 0x00000123
#define RNGDS_MITG_DIS BIT(0)
#define MSR_IA32_SYSENTER_CS 0x00000174
#define MSR_IA32_SYSENTER_ESP 0x00000175
#define MSR_IA32_SYSENTER_EIP 0x00000176


@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
static void __init mds_select_mitigation(void);
static void __init mds_print_mitigation(void);
static void __init taa_select_mitigation(void);
static void __init srbds_select_mitigation(void);
/* The base value of the SPEC_CTRL MSR that always has to be preserved. */
u64 x86_spec_ctrl_base;
@ -108,6 +109,7 @@ void __init check_bugs(void)
l1tf_select_mitigation();
mds_select_mitigation();
taa_select_mitigation();
srbds_select_mitigation();
/*
* As MDS and TAA mitigations are inter-related, print MDS
@ -390,6 +392,97 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
}
early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "SRBDS: " fmt
enum srbds_mitigations {
SRBDS_MITIGATION_OFF,
SRBDS_MITIGATION_UCODE_NEEDED,
SRBDS_MITIGATION_FULL,
SRBDS_MITIGATION_TSX_OFF,
SRBDS_MITIGATION_HYPERVISOR,
};
static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
static const char * const srbds_strings[] = {
[SRBDS_MITIGATION_OFF] = "Vulnerable",
[SRBDS_MITIGATION_UCODE_NEEDED] = "Vulnerable: No microcode",
[SRBDS_MITIGATION_FULL] = "Mitigation: Microcode",
[SRBDS_MITIGATION_TSX_OFF] = "Mitigation: TSX disabled",
[SRBDS_MITIGATION_HYPERVISOR] = "Unknown: Dependent on hypervisor status",
};
static bool srbds_off;
void update_srbds_msr(void)
{
u64 mcu_ctrl;
if (!boot_cpu_has_bug(X86_BUG_SRBDS))
return;
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return;
if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
return;
rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
switch (srbds_mitigation) {
case SRBDS_MITIGATION_OFF:
case SRBDS_MITIGATION_TSX_OFF:
mcu_ctrl |= RNGDS_MITG_DIS;
break;
case SRBDS_MITIGATION_FULL:
mcu_ctrl &= ~RNGDS_MITG_DIS;
break;
default:
break;
}
wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
}
static void __init srbds_select_mitigation(void)
{
u64 ia32_cap;
if (!boot_cpu_has_bug(X86_BUG_SRBDS))
return;
/*
* Check to see if this is one of the MDS_NO systems supporting
* TSX that are only exposed to SRBDS when TSX is enabled.
*/
ia32_cap = x86_read_arch_cap_msr();
if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
else if (cpu_mitigations_off() || srbds_off)
srbds_mitigation = SRBDS_MITIGATION_OFF;
update_srbds_msr();
pr_info("%s\n", srbds_strings[srbds_mitigation]);
}
static int __init srbds_parse_cmdline(char *str)
{
if (!str)
return -EINVAL;
if (!boot_cpu_has_bug(X86_BUG_SRBDS))
return 0;
srbds_off = !strcmp(str, "off");
return 0;
}
early_param("srbds", srbds_parse_cmdline);
#undef pr_fmt
#define pr_fmt(fmt) "Spectre V1 : " fmt
@ -1491,6 +1584,11 @@ static char *ibpb_state(void)
return "";
}
static ssize_t srbds_show_state(char *buf)
{
return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
}
static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
char *buf, unsigned int bug)
{
@ -1535,6 +1633,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
case X86_BUG_ITLB_MULTIHIT:
return itlb_multihit_show_state(buf);
case X86_BUG_SRBDS:
return srbds_show_state(buf);
default:
break;
}
@ -1581,4 +1682,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
{
return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
}
ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
{
return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
}
#endif


@ -1013,9 +1013,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
{}
};
static bool __init cpu_matches(unsigned long which)
#define VULNBL_INTEL_STEPPINGS(model, steppings, issues) \
X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6, \
INTEL_FAM6_##model, steppings, \
X86_FEATURE_ANY, issues)
#define SRBDS BIT(0)
static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(HASWELL_CORE, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(HASWELL_ULT, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(HASWELL_GT3E, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(BROADWELL_GT3E, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(BROADWELL_CORE, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(SKYLAKE_MOBILE, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(SKYLAKE_DESKTOP, X86_STEPPING_ANY, SRBDS),
VULNBL_INTEL_STEPPINGS(KABYLAKE_MOBILE, X86_STEPPINGS(0x0, 0xC), SRBDS),
VULNBL_INTEL_STEPPINGS(KABYLAKE_DESKTOP,X86_STEPPINGS(0x0, 0xD), SRBDS),
{}
};
static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
{
const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
const struct x86_cpu_id *m = x86_match_cpu(table);
return m && !!(m->driver_data & which);
}
@ -1035,29 +1056,32 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
u64 ia32_cap = x86_read_arch_cap_msr();
/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
!(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
if (cpu_matches(NO_SPECULATION))
if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
return;
setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
!(ia32_cap & ARCH_CAP_SSB_NO) &&
!cpu_has(c, X86_FEATURE_AMD_SSB_NO))
setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
if (ia32_cap & ARCH_CAP_IBRS_ALL)
setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
!(ia32_cap & ARCH_CAP_MDS_NO)) {
setup_force_cpu_bug(X86_BUG_MDS);
if (cpu_matches(MSBDS_ONLY))
if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
}
if (!cpu_matches(NO_SWAPGS))
if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
setup_force_cpu_bug(X86_BUG_SWAPGS);
/*
@ -1075,7 +1099,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
(ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
setup_force_cpu_bug(X86_BUG_TAA);
if (cpu_matches(NO_MELTDOWN))
/*
* SRBDS affects CPUs which support RDRAND or RDSEED and are listed
* in the vulnerability blacklist.
*/
if ((cpu_has(c, X86_FEATURE_RDRAND) ||
cpu_has(c, X86_FEATURE_RDSEED)) &&
cpu_matches(cpu_vuln_blacklist, SRBDS))
setup_force_cpu_bug(X86_BUG_SRBDS);
if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
return;
/* Rogue Data Cache Load? No! */
@ -1084,7 +1117,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
if (cpu_matches(NO_L1TF))
if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
return;
setup_force_cpu_bug(X86_BUG_L1TF);
@ -1519,6 +1552,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
mtrr_ap_init();
validate_apic_and_package_id(c);
x86_spec_ctrl_setup_ap();
update_srbds_msr();
}
static __init int setup_noclflush(char *arg)


@ -80,6 +80,7 @@ extern void detect_ht(struct cpuinfo_x86 *c);
unsigned int aperfmperf_get_khz(int cpu);
extern void x86_spec_ctrl_setup_ap(void);
extern void update_srbds_msr(void);
extern u64 x86_read_arch_cap_msr(void);


@ -34,13 +34,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
const struct x86_cpu_id *m;
struct cpuinfo_x86 *c = &boot_cpu_data;
for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
for (m = match;
m->vendor | m->family | m->model | m->steppings | m->feature;
m++) {
if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
continue;
if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
continue;
if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
continue;
if (m->steppings != X86_STEPPING_ANY &&
!(BIT(c->x86_stepping) & m->steppings))
continue;
if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
continue;
return m;
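The new test treats ->steppings as a bitmask of acceptable steppings:
X86_STEPPINGS(0x0, 0xC) from cpu_device_id.h expands to GENMASK(0xC, 0x0) ==
0x1fff, so a part reporting stepping 0xD falls through to the next entry. A
self-contained sketch of just that comparison (constants inlined for
illustration; not the kernel's code):

#include <stdbool.h>

/* Sketch: stepping check equivalent to the new test in x86_match_cpu() above. */
static bool stepping_matches(unsigned int cpu_stepping, unsigned short steppings)
{
	if (steppings == 0)     /* X86_STEPPING_ANY */
		return true;
	return ((1u << cpu_stepping) & steppings) != 0;
}

/* stepping_matches(0x9, 0x1fff) == true, stepping_matches(0xD, 0x1fff) == false */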


@ -566,6 +566,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
return sprintf(buf, "Not affected\n");
}
ssize_t __weak cpu_show_srbds(struct device *dev,
struct device_attribute *attr, char *buf)
{
return sprintf(buf, "Not affected\n");
}
static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
@ -574,6 +580,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_meltdown.attr,
@ -584,6 +591,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
&dev_attr_mds.attr,
&dev_attr_tsx_async_abort.attr,
&dev_attr_itlb_multihit.attr,
&dev_attr_srbds.attr,
NULL
};


@ -166,7 +166,6 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
u8 rdy_mask, u8 data_reg, int *val)
{
int tries = 20;
__be16 buf;
int ret;
mutex_lock(&data->vcnl4000_lock);
@ -193,13 +192,12 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
goto fail;
}
ret = i2c_smbus_read_i2c_block_data(data->client,
data_reg, sizeof(buf), (u8 *) &buf);
ret = i2c_smbus_read_word_swapped(data->client, data_reg);
if (ret < 0)
goto fail;
mutex_unlock(&data->vcnl4000_lock);
*val = be16_to_cpu(buf);
*val = ret;
return 0;


@ -364,12 +364,6 @@ static void del_sw_ns(struct fs_node *node)
static void del_sw_prio(struct fs_node *node)
{
struct mlx5_flow_root_namespace *root_ns;
struct mlx5_flow_namespace *ns;
fs_get_obj(ns, node);
root_ns = container_of(ns, struct mlx5_flow_root_namespace, ns);
mutex_destroy(&root_ns->chain_lock);
kfree(node);
}


@ -1260,6 +1260,7 @@ static const struct usb_device_id products[] = {
{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)}, /* Alcatel L800MA */
{QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
{QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */
{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
{QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */


@ -184,8 +184,10 @@ static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev,
memcpy(atr_res->gbi, atr_req->gbi, gb_len);
r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi,
gb_len);
if (r < 0)
if (r < 0) {
kfree_skb(skb);
return r;
}
}
info->dep_info.curr_nfc_dep_pni = 0;


@ -35,25 +35,11 @@ static int qfprom_reg_read(void *context,
return 0;
}
static int qfprom_reg_write(void *context,
unsigned int reg, void *_val, size_t bytes)
{
struct qfprom_priv *priv = context;
u8 *val = _val;
int i = 0, words = bytes;
while (words--)
writeb(*val++, priv->base + reg + i++);
return 0;
}
static struct nvmem_config econfig = {
.name = "qfprom",
.stride = 1,
.word_size = 1,
.reg_read = qfprom_reg_read,
.reg_write = qfprom_reg_write,
};
static int qfprom_probe(struct platform_device *pdev)


@ -468,7 +468,7 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
/* block-ack parameters */
#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
@ -562,13 +562,6 @@ struct ieee80211_ht_addt_info {
#define IEEE80211_HT_IE_NON_GF_STA_PRSNT 0x0004
#define IEEE80211_HT_IE_NON_HT_STA_PRSNT 0x0010
/* block-ack parameters */
#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
/*
* A-MPDU buffer sizes
* According to IEEE802.11n spec size varies from 8K to 64K (in powers of 2)


@ -371,15 +371,14 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
* tty fields and return the kref reference.
*/
if (rc) {
tty_port_tty_set(&hp->port, NULL);
tty->driver_data = NULL;
tty_port_put(&hp->port);
printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
} else
} else {
/* We are ready... raise DTR/RTS */
if (C_BAUD(tty))
if (hp->ops->dtr_rts)
hp->ops->dtr_rts(hp, 1);
tty_port_set_initialized(&hp->port, true);
}
/* Force wakeup of the polling thread */
hvc_kick();
@ -389,22 +388,12 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
static void hvc_close(struct tty_struct *tty, struct file * filp)
{
struct hvc_struct *hp;
struct hvc_struct *hp = tty->driver_data;
unsigned long flags;
if (tty_hung_up_p(filp))
return;
/*
* No driver_data means that this close was issued after a failed
* hvc_open by the tty layer's release_dev() function and we can just
* exit cleanly because the kref reference wasn't made.
*/
if (!tty->driver_data)
return;
hp = tty->driver_data;
spin_lock_irqsave(&hp->port.lock, flags);
if (--hp->port.count == 0) {
@ -412,6 +401,9 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
/* We are done with the tty pointer now. */
tty_port_tty_set(&hp->port, NULL);
if (!tty_port_initialized(&hp->port))
return;
if (C_HUPCL(tty))
if (hp->ops->dtr_rts)
hp->ops->dtr_rts(hp, 0);
@ -428,6 +420,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
* waking periodically to check chars_in_buffer().
*/
tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
tty_port_set_initialized(&hp->port, false);
} else {
if (hp->port.count < 0)
printk(KERN_ERR "hvc_close %X: oops, count is %d\n",


@ -127,7 +127,11 @@ static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf' and friends */
static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)]; /* keyboard key bitmap */
static unsigned char shift_down[NR_SHIFT]; /* shift state counters.. */
static bool dead_key_next;
static int npadch = -1; /* -1 or number assembled on pad */
/* Handles a number being assembled on the number pad */
static bool npadch_active;
static unsigned int npadch_value;
static unsigned int diacr;
static char rep; /* flag telling character repeat */
@ -845,12 +849,12 @@ static void k_shift(struct vc_data *vc, unsigned char value, char up_flag)
shift_state &= ~(1 << value);
/* kludge */
if (up_flag && shift_state != old_state && npadch != -1) {
if (up_flag && shift_state != old_state && npadch_active) {
if (kbd->kbdmode == VC_UNICODE)
to_utf8(vc, npadch);
to_utf8(vc, npadch_value);
else
put_queue(vc, npadch & 0xff);
npadch = -1;
put_queue(vc, npadch_value & 0xff);
npadch_active = false;
}
}
@ -868,7 +872,7 @@ static void k_meta(struct vc_data *vc, unsigned char value, char up_flag)
static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
{
int base;
unsigned int base;
if (up_flag)
return;
@ -882,10 +886,12 @@ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
base = 16;
}
if (npadch == -1)
npadch = value;
else
npadch = npadch * base + value;
if (!npadch_active) {
npadch_value = 0;
npadch_active = true;
}
npadch_value = npadch_value * base + value;
}
static void k_lock(struct vc_data *vc, unsigned char value, char up_flag)


@ -590,7 +590,7 @@ static void acm_softint(struct work_struct *work)
}
if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
for (i = 0; i < ACM_NR; i++)
for (i = 0; i < acm->rx_buflimit; i++)
if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
acm_submit_read_urb(acm, i, GFP_NOIO);
}


@ -2737,6 +2737,13 @@ static int musb_resume(struct device *dev)
musb_enable_interrupts(musb);
musb_platform_enable(musb);
/* session might be disabled in suspend */
if (musb->port_mode == MUSB_HOST &&
!(musb->ops->quirks & MUSB_PRESERVE_SESSION)) {
devctl |= MUSB_DEVCTL_SESSION;
musb_writeb(musb->mregs, MUSB_DEVCTL, devctl);
}
spin_lock_irqsave(&musb->lock, flags);
error = musb_run_resume_work(musb);
if (error)


@ -168,6 +168,11 @@ static ssize_t musb_test_mode_write(struct file *file,
u8 test;
char buf[24];
memset(buf, 0x00, sizeof(buf));
if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
pm_runtime_get_sync(musb->controller);
test = musb_readb(musb->mregs, MUSB_TESTMODE);
if (test) {
@ -176,11 +181,6 @@ static ssize_t musb_test_mode_write(struct file *file,
goto ret;
}
memset(buf, 0x00, sizeof(buf));
if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
return -EFAULT;
if (strstarts(buf, "force host full-speed"))
test = MUSB_TEST_FORCE_HOST | MUSB_TEST_FORCE_FS;


@ -1157,6 +1157,10 @@ static const struct usb_device_id option_ids[] = {
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1031, 0xff), /* Telit LE910C1-EUX */
.driver_info = NCTRL(0) | RSVD(3) },
{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff), /* Telit LE910C1-EUX (ECM) */
.driver_info = NCTRL(0) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
.driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),


@ -173,6 +173,7 @@ static const struct usb_device_id id_table[] = {
{DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
{DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */
{DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */
{DEVICE_SWI(0x413c, 0x81cb)}, /* Dell Wireless 5816e QDL */
{DEVICE_SWI(0x413c, 0x81cc)}, /* Dell Wireless 5816e */
{DEVICE_SWI(0x413c, 0x81cf)}, /* Dell Wireless 5819 */
{DEVICE_SWI(0x413c, 0x81d0)}, /* Dell Wireless 5819 */


@ -299,6 +299,10 @@ static void usb_wwan_indat_callback(struct urb *urb)
if (status) {
dev_dbg(dev, "%s: nonzero status: %d on endpoint %02x.\n",
__func__, status, endpoint);
/* don't resubmit on fatal errors */
if (status == -ESHUTDOWN || status == -ENOENT)
return;
} else {
if (urb->actual_length) {
tty_insert_flip_string(&port->port, data,


@ -621,6 +621,10 @@ struct mips_cdmm_device_id {
/*
* MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
* Although gcc seems to ignore this error, clang fails without this define.
*
* Note: The ordering of the struct is different from upstream because the
* static initializers in kernels < 5.7 still use C89 style while upstream
* has been converted to proper C99 initializers.
*/
#define x86cpu_device_id x86_cpu_id
struct x86_cpu_id {
@ -629,6 +633,7 @@ struct x86_cpu_id {
__u16 model;
__u16 feature; /* bit index */
kernel_ulong_t driver_data;
__u16 steppings;
};
#define X86_FEATURE_MATCH(x) \
@ -637,6 +642,7 @@ struct x86_cpu_id {
#define X86_VENDOR_ANY 0xffff
#define X86_FAMILY_ANY 0
#define X86_MODEL_ANY 0
#define X86_STEPPING_ANY 0
#define X86_FEATURE_ANY 0 /* Same as FPU, you can't test for that */
/*
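The ordering note above matters because 4.19-era users of struct x86_cpu_id
still rely on positional (C89-style) initialization, so the new member has to
be appended; left out of such an initializer it defaults to 0, i.e.
X86_STEPPING_ANY. A hypothetical example (field values made up; it assumes the
vendor and family members at the top of the struct, which this hunk does not
show):

/* Positional C89-style init: vendor, family, model, feature, driver_data.
 * 'steppings' is omitted and defaults to 0 == X86_STEPPING_ANY, so existing
 * tables keep their old "any stepping" behaviour unchanged.
 */
static const struct x86_cpu_id old_style_entry = {
	X86_VENDOR_INTEL, 6, 0x3A, X86_FEATURE_ANY, 0,
};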


@ -31,6 +31,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
{
unsigned int gso_type = 0;
unsigned int thlen = 0;
unsigned int p_off = 0;
unsigned int ip_proto;
if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
@ -68,7 +69,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
if (!skb_partial_csum_set(skb, start, off))
return -EINVAL;
if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
p_off = skb_transport_offset(skb) + thlen;
if (p_off > skb_headlen(skb))
return -EINVAL;
} else {
/* gso packets without NEEDS_CSUM do not set transport_offset.
@ -92,17 +94,25 @@ retry:
return -EINVAL;
}
if (keys.control.thoff + thlen > skb_headlen(skb) ||
p_off = keys.control.thoff + thlen;
if (p_off > skb_headlen(skb) ||
keys.basic.ip_proto != ip_proto)
return -EINVAL;
skb_set_transport_header(skb, keys.control.thoff);
} else if (gso_type) {
p_off = thlen;
if (p_off > skb_headlen(skb))
return -EINVAL;
}
}
if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
if (skb->len - p_off <= gso_size)
return -EINVAL;
skb_shinfo(skb)->gso_size = gso_size;
skb_shinfo(skb)->gso_type = gso_type;


@ -612,10 +612,6 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
if (ret)
goto out;
/* uprobe_write_opcode() assumes we don't cross page boundary */
BUG_ON((uprobe->offset & ~PAGE_MASK) +
UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
set_bit(UPROBE_COPY_INSN, &uprobe->flags);
@ -911,6 +907,13 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
if (offset > i_size_read(inode))
return -EINVAL;
/*
* This ensures that copy_from_page() and copy_to_page()
* can't cross page boundary.
*/
if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
return -EINVAL;
retry:
uprobe = alloc_uprobe(inode, offset);
if (!uprobe)
@ -1708,6 +1711,9 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
uprobe_opcode_t opcode;
int result;
if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
return -EINVAL;
pagefault_disable();
result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
pagefault_enable();


@ -269,6 +269,7 @@ static struct in_device *inetdev_init(struct net_device *dev)
err = devinet_sysctl_register(in_dev);
if (err) {
in_dev->dead = 1;
neigh_parms_release(&arp_tbl, in_dev->arp_parms);
in_dev_put(in_dev);
in_dev = NULL;
goto out;


@ -1463,6 +1463,9 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
if (sk->sk_type != SOCK_DGRAM)
return -EPROTONOSUPPORT;
if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6)
return -EPROTONOSUPPORT;
if ((encap == L2TP_ENCAPTYPE_UDP && sk->sk_protocol != IPPROTO_UDP) ||
(encap == L2TP_ENCAPTYPE_IP && sk->sk_protocol != IPPROTO_L2TP))
return -EPROTONOSUPPORT;


@ -24,7 +24,6 @@
#include <net/icmp.h>
#include <net/udp.h>
#include <net/inet_common.h>
#include <net/inet_hashtables.h>
#include <net/tcp_states.h>
#include <net/protocol.h>
#include <net/xfrm.h>
@ -213,15 +212,31 @@ discard:
return 0;
}
static int l2tp_ip_hash(struct sock *sk)
{
if (sk_unhashed(sk)) {
write_lock_bh(&l2tp_ip_lock);
sk_add_node(sk, &l2tp_ip_table);
write_unlock_bh(&l2tp_ip_lock);
}
return 0;
}
static void l2tp_ip_unhash(struct sock *sk)
{
if (sk_unhashed(sk))
return;
write_lock_bh(&l2tp_ip_lock);
sk_del_node_init(sk);
write_unlock_bh(&l2tp_ip_lock);
}
static int l2tp_ip_open(struct sock *sk)
{
/* Prevent autobind. We don't have ports. */
inet_sk(sk)->inet_num = IPPROTO_L2TP;
write_lock_bh(&l2tp_ip_lock);
sk_add_node(sk, &l2tp_ip_table);
write_unlock_bh(&l2tp_ip_lock);
l2tp_ip_hash(sk);
return 0;
}
@ -598,8 +613,8 @@ static struct proto l2tp_ip_prot = {
.sendmsg = l2tp_ip_sendmsg,
.recvmsg = l2tp_ip_recvmsg,
.backlog_rcv = l2tp_ip_backlog_recv,
.hash = inet_hash,
.unhash = inet_unhash,
.hash = l2tp_ip_hash,
.unhash = l2tp_ip_unhash,
.obj_size = sizeof(struct l2tp_ip_sock),
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_ip_setsockopt,


@ -24,8 +24,6 @@
#include <net/icmp.h>
#include <net/udp.h>
#include <net/inet_common.h>
#include <net/inet_hashtables.h>
#include <net/inet6_hashtables.h>
#include <net/tcp_states.h>
#include <net/protocol.h>
#include <net/xfrm.h>
@ -226,15 +224,31 @@ discard:
return 0;
}
static int l2tp_ip6_hash(struct sock *sk)
{
if (sk_unhashed(sk)) {
write_lock_bh(&l2tp_ip6_lock);
sk_add_node(sk, &l2tp_ip6_table);
write_unlock_bh(&l2tp_ip6_lock);
}
return 0;
}
static void l2tp_ip6_unhash(struct sock *sk)
{
if (sk_unhashed(sk))
return;
write_lock_bh(&l2tp_ip6_lock);
sk_del_node_init(sk);
write_unlock_bh(&l2tp_ip6_lock);
}
static int l2tp_ip6_open(struct sock *sk)
{
/* Prevent autobind. We don't have ports. */
inet_sk(sk)->inet_num = IPPROTO_L2TP;
write_lock_bh(&l2tp_ip6_lock);
sk_add_node(sk, &l2tp_ip6_table);
write_unlock_bh(&l2tp_ip6_lock);
l2tp_ip6_hash(sk);
return 0;
}
@ -732,8 +746,8 @@ static struct proto l2tp_ip6_prot = {
.sendmsg = l2tp_ip6_sendmsg,
.recvmsg = l2tp_ip6_recvmsg,
.backlog_rcv = l2tp_ip6_backlog_recv,
.hash = inet6_hash,
.unhash = inet_unhash,
.hash = l2tp_ip6_hash,
.unhash = l2tp_ip6_unhash,
.obj_size = sizeof(struct l2tp_ip6_sock),
#ifdef CONFIG_COMPAT
.compat_setsockopt = compat_ipv6_setsockopt,


@ -1283,7 +1283,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
/* Wait for children sockets to appear; these are the new sockets
* created upon connection establishment.
*/
timeout = sock_sndtimeo(listener, flags & O_NONBLOCK);
timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
while ((connected = vsock_dequeue_accept(listener)) == NULL &&