Revert existing FBE changes for ICE FBE

Revert existing FBE kernel changes for ICE upstream.
Revert UFS qcom specific ice changes.
Revert all sdhci related ICE changes formatted.
defconfig: Remove old FBE/ICE defconfigs.

Change-Id: I4d77927b6373b3bb3edfe3b060d1de272a54a426
Signed-off-by: Gaurav Kashyap <gaurkash@codeaurora.org>
Signed-off-by: Neeraj Soni <neersoni@codeaurora.org>
Gaurav Kashyap 2019-09-03 20:31:27 -07:00 committed by Blagovest Kolenichev
parent 5483493a93
commit 2ceec83a4f
94 changed files with 117 additions and 7858 deletions


@ -1,235 +0,0 @@
Introduction:
=============
Storage encryption has long been one of the most requested features from a
security point of view. The QTI-based storage encryption solution uses a
general-purpose crypto engine. While this kind of solution provides a decent
amount of performance, it falls short as storage speeds keep improving
significantly. To overcome this performance degradation, newer chips embed an
Inline Crypto Engine (ICE) into the storage device. ICE is expected to keep up
with the line speed of storage devices.
Hardware Description
====================
ICE is a HW block embedded into a storage device such as UFS/eMMC. By
default, ICE works in bypass mode, i.e. the ICE HW does not perform any crypto
operation on data processed by the storage device. If required, ICE can be
configured to perform the crypto operation in one direction (either encryption
or decryption) or in both directions (both encryption and decryption).
When a switch between the operation modes (plain to crypto or crypto to plain)
is desired for a particular partition, SW must complete all transactions for
that partition before switching the crypto mode, i.e. no crypto, one-direction
crypto or both-direction crypto operation. Requests for other partitions are
not impacted by the crypto mode switch.
ICE HW currently supports the AES-128/256 bit ECB & XTS encryption algorithms.
Keys for crypto operations are loaded from SW. Keys are stored in a lookup
table (LUT) located inside the ICE HW. A maximum of 32 keys can be loaded into
the ICE key LUT. A key inside the LUT can be referenced using a key index.
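For illustration only (the names below are hypothetical and not taken from the
driver sources in this commit), one slot of the key LUT can be pictured from
the SW side roughly as follows:

/* Illustrative sketch of one ICE key-LUT slot as seen from SW.
 * Requires <linux/types.h> for bool in kernel context.
 */
enum ice_crypto_algo {
	ICE_CRYPTO_AES_128_ECB,
	ICE_CRYPTO_AES_256_ECB,
	ICE_CRYPTO_AES_128_XTS,
	ICE_CRYPTO_AES_256_XTS,
};

#define ICE_KEY_LUT_SIZE	32	/* maximum number of keys in the LUT */

struct ice_key_slot {
	unsigned short key_index;	/* 0..ICE_KEY_LUT_SIZE - 1, how SW refers to the key */
	enum ice_crypto_algo algo;	/* ECB or XTS, 128- or 256-bit key */
	bool programmed;		/* key has been loaded by the TZ-side driver */
};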
SW Description
==============
ICE HW registers are categorized into two groups: those which can be accessed
only by the secure side, i.e. TZ, and those which can also be accessed by the
non-secure side such as HLOS. This requires the ICE driver to be split into
two pieces: one running in TZ space and another in HLOS space.
The ICE driver in TZ configures keys as requested by the HLOS side.
The ICE driver on the HLOS side is responsible for initializing the ICE HW.
SW Architecture Diagram
=======================
The following are all the components involved in the ICE driver control path:
+++++++++++++++++++++++++++++++++++++++++
+ App layer +
+++++++++++++++++++++++++++++++++++++++++
+ System layer +
+ ++++++++ +++++++ +
+ + VOLD + + PFM + +
+ ++++++++ +++++++ +
+ || || +
+ || || +
+ \/ \/ +
+ ++++++++++++++ +
+ + LibQSEECom + +
+ ++++++++++++++ +
+++++++++++++++++++++++++++++++++++++++++
+ Kernel + +++++++++++++++++
+ + + KMS +
+ +++++++ +++++++++++ +++++++++++ + +++++++++++++++++
+ + ICE + + Storage + + QSEECom + + + ICE Driver +
+++++++++++++++++++++++++++++++++++++++++ <===> +++++++++++++++++
|| ||
|| ||
\/ \/
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Storage Device +
+ ++++++++++++++ +
+ + ICE HW + +
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Use Cases:
----------
a) Device bootup
ICE HW is detected during boot and the corresponding probe function is
called. The ICE driver parses its data from the device tree node. ICE HW and
storage HW are tightly coupled; storage device probing depends on ICE device
probing. The ICE driver configures all the required registers to put the ICE
HW in bypass mode.
b) Configuring keys
Currently, there are a couple of use cases for configuring keys.
1) Full Disk Encryption (FDE)
The system layer (VOLD), when invoked by the apps layer, calls libqseecom to
create the encryption key. Libqseecom calls the qseecom driver to communicate
with the KMS module on the secure side, i.e. TZ. KMS calls the ICE driver on
the TZ side to create and set the keys in the ICE HW. At the end of the
transaction, VOLD holds the key index of the key LUT slot where the encryption
key is present.
2) Per File Encryption (PFE)
The Per File Manager (PFM) calls the QSEECom API to create the key. PFM has a
peer component (PFT) at the kernel layer which gets the corresponding key
index from PFM.
The following are all the components involved in the ICE driver data path:
+++++++++++++++++++++++++++++++++++++++++
+ App layer +
+++++++++++++++++++++++++++++++++++++++++
+ VFS +
+---------------------------------------+
+ File System (EXT4) +
+---------------------------------------+
+ Block Layer +
+ --------------------------------------+
+ +++++++ +
+ dm-req-crypt => + PFT + +
+ +++++++ +
+ +
+---------------------------------------+
+ +++++++++++ +++++++ +
+ + Storage + + ICE + +
+++++++++++++++++++++++++++++++++++++++++
+ || +
+ || (Storage Req with +
+ \/ ICE parameters ) +
+++++++++++++++++++++++++++++++++++++++++
+ Storage Device +
+ ++++++++++++++ +
+ + ICE HW + +
+++++++++++++++++++++++++++++++++++++++++
c) Data transaction
Once the crypto key has been configured, VOLD/PFM creates a device mapping for
the data partition. As part of device mapping, VOLD passes the key index,
crypto algorithm, mode and key length to the dm layer. In the case of PFE,
keys are provided by PFT as and when a request is processed by dm-req-crypt.
When any application needs to read/write data, it goes through the DM layer,
which adds the crypto information provided by VOLD/PFT to the request. For
each request, the storage driver asks the ICE driver to configure the crypto
part of the request. The ICE driver extracts the crypto data from the request
structure and provides it to the storage driver, which finally dispatches the
request to the storage device.
d) Error Handling
Due to issue #1 mentioned in "Known Issues", the ICE driver does not register
for any interrupt. However, it enables the sources of interrupt for the ICE
HW. After each data transaction, the storage driver receives a transaction
completion event. As part of event handling, the storage driver calls the ICE
driver to check whether any ICE interrupt status is set. If yes, the storage
driver returns an error to the upper layer. Error handling will be changed in
future chips.
Interfaces
==========
The ICE driver exposes interfaces for the storage driver to:
1. Get the global instance of ICE driver
2. Get the implemented interfaces of the particular ice instance
3. Initialize the ICE HW
4. Reset the ICE HW
5. Resume/Suspend the ICE HW
6. Get the Crypto configuration for the data request for storage
7. Check if current data transaction has generated any interrupt
Driver Parameters
=================
This driver is built and statically linked into the kernel; therefore,
there are no module parameters supported by this driver.
There are no kernel command line parameters supported by this driver.
Power Management
================
The ICE driver does not do power management on its own, as it is part of the
storage hardware. Whenever the storage driver receives a request for power
collapse/suspend or resume, it calls the ICE driver, which exposes APIs for
the storage HW. During power collapse or reset, the ICE HW wipes its crypto
configuration data. When the ICE driver receives a request to resume, it asks
the ICE driver on the TZ side to restore the configuration. The ICE driver
does not do anything as part of a power collapse or suspend event.
Interface:
==========
The ICE driver exposes the following APIs for the storage driver to use:
int (*init)(struct platform_device *, void *, ice_success_cb, ice_error_cb);
-- This function is invoked by the storage controller during its
initialization. The storage controller provides success and error callbacks
which are invoked asynchronously once ICE HW init is done.
int (*reset)(struct platform_device *);
-- ICE HW is reset as part of storage controller reset. When the storage
controller receives a reset command, it calls reset on the ICE HW. As of now,
the ICE HW does not need to do anything as part of reset.
int (*resume)(struct platform_device *);
-- While going through reset, the ICE HW wipes all crypto keys and other data
from the ICE HW. The ICE driver reconfigures that data as part of the resume
operation.
int (*suspend)(struct platform_device *);
-- This API is called by the storage driver when the storage device is going
into suspend mode. As of today, the ICE driver does not do anything to handle
suspend.
int (*config)(struct platform_device *, struct request *, struct ice_data_setting *);
-- The storage driver calls this interface to get all crypto data required to
perform the crypto operation.
int (*status)(struct platform_device *);
-- The storage driver calls this interface to check whether the previous data
transfer generated any error.
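Taken together, these callbacks can be pictured as one ops table held by the
storage driver. The sketch below is illustrative: the type name matches the
qcom_ice_variant_ops used by the SDHCI glue later in this commit, but the
exact member layout and the callback typedefs are assumptions, not the
verbatim driver definitions.

#include <linux/types.h>

struct platform_device;
struct request;
struct ice_data_setting;

typedef void (*ice_success_cb)(void *host_ctrl);
typedef void (*ice_error_cb)(void *host_ctrl, u32 error);

/* Assumed grouping of the callbacks listed above. */
struct qcom_ice_variant_ops {
	int (*init)(struct platform_device *pdev, void *host_ctrl,
		    ice_success_cb success_cb, ice_error_cb error_cb);
	int (*reset)(struct platform_device *pdev);
	int (*resume)(struct platform_device *pdev);
	int (*suspend)(struct platform_device *pdev);
	int (*config)(struct platform_device *pdev, struct request *req,
		      struct ice_data_setting *setting);
	int (*status)(struct platform_device *pdev);
};

In this picture, a storage driver calls init() once at controller probe time,
config() for every data request to obtain the key index and bypass setting,
status() from its completion handler to detect ICE errors, and resume() after
a power collapse so that the TZ-side driver can restore the wiped keys.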
Config options
==============
This driver is enabled by the kernel config option CONFIG_CRYPTO_DEV_MSM_ICE.
Dependencies
============
The ICE driver depends upon the corresponding ICE driver on the TZ side to
function properly.
Known Issues
============
1. ICE HW emits 0s even if it has generated an interrupt
This issue has a significant impact on how ICE interrupts are handled.
Currently, the ICE driver does not register for any of the ICE interrupts but
enables the interrupt sources. Once the storage driver asks it to check the
interrupt status, it reads and clears the status and provides the read status
to the storage driver. This mechanism, though not optimal, prevents filesystem
corruption.
This issue has been fixed in newer chips.
2. ICE HW wipes all crypto data during power collapse
This issue necessitates that the ICE driver on the TZ side store the crypto
material, which is not required in the case of a general-purpose crypto
engine.
This issue has been fixed in newer chips.
Further Improvements
====================
Currently, due to the PFE use case, the ICE driver depends on dm-req-crypt to
provide the keys as part of the request structure. This couples the ICE driver
with the dm-req-crypt based solution. It is under discussion to expose
IOCTL-based and registration-based interface APIs from the ICE driver. The ICE
driver would use these two interfaces to find out whether any key exists for
the current request. If yes, it would choose the right key index received from
the IOCTL- or registration-based APIs. If not, it would not set any crypto
parameter in the request.


@ -268,11 +268,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
@ -436,7 +434,6 @@ CONFIG_MMC_BLOCK_MINORS=32
CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -570,7 +567,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_LSM_MMAP_MIN_ADDR=4096
@ -588,7 +584,6 @@ CONFIG_CRYPTO_DEV_QCE=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_FRAME_WARN=2048


@ -283,11 +283,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
@ -471,7 +469,6 @@ CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -619,7 +616,6 @@ CONFIG_SDCARD_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_LSM_MMAP_MIN_ADDR=4096
@ -637,7 +633,6 @@ CONFIG_CRYPTO_DEV_QCE=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y


@ -288,11 +288,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
@ -466,7 +464,6 @@ CONFIG_MMC_BLOCK_MINORS=32
CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -610,7 +607,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -627,7 +623,6 @@ CONFIG_CRYPTO_DEV_QCE=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_STACK_HASH_ORDER_SHIFT=12
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y


@ -298,12 +298,10 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
CONFIG_DM_VERITY_FEC=y
@ -479,7 +477,6 @@ CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -636,7 +633,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -653,7 +649,6 @@ CONFIG_CRYPTO_DEV_QCE=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_CONSOLE_UNHASHED_POINTERS=y


@ -294,11 +294,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -667,7 +665,6 @@ CONFIG_ECRYPT_FS_MESSAGING=y
CONFIG_SDCARD_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -685,7 +682,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y


@ -308,12 +308,10 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -701,7 +699,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -720,7 +717,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y


@ -297,11 +297,8 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -651,9 +648,9 @@ CONFIG_SENSORS_SSC=y
CONFIG_QCOM_KGSL=y
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_SECURITY=y
CONFIG_EXT4_ENCRYPTION=y
CONFIG_F2FS_FS=y
CONFIG_F2FS_FS_SECURITY=y
CONFIG_FS_ENCRYPTION=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_QFMT_V2=y
@ -668,7 +665,6 @@ CONFIG_ECRYPT_FS_MESSAGING=y
CONFIG_SDCARD_FS=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -686,7 +682,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_FS=y


@ -311,12 +311,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -703,7 +700,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -722,7 +718,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y


@ -290,11 +290,9 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -488,7 +486,6 @@ CONFIG_MMC_BLOCK_DEFERRED_RESUME=y
CONFIG_MMC_TEST=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -651,7 +648,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -667,7 +663,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_STACK_HASH_ORDER_SHIFT=12
CONFIG_PRINTK_TIME=y
CONFIG_DEBUG_INFO=y


@ -296,12 +296,10 @@ CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_SCSI_UFSHCD=y
CONFIG_SCSI_UFSHCD_PLATFORM=y
CONFIG_SCSI_UFS_QCOM=y
CONFIG_SCSI_UFS_QCOM_ICE=y
CONFIG_SCSI_UFSHCD_CMD_LOGGING=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_DM_DEFAULT_KEY=y
CONFIG_DM_SNAPSHOT=y
CONFIG_DM_UEVENT=y
CONFIG_DM_VERITY=y
@ -497,7 +495,6 @@ CONFIG_MMC_TEST=y
CONFIG_MMC_IPC_LOGGING=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
CONFIG_MMC_SDHCI_MSM_ICE=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
@ -672,7 +669,6 @@ CONFIG_SDCARD_FS=y
# CONFIG_NETWORK_FILESYSTEMS is not set
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PFK=y
CONFIG_SECURITY_PERF_EVENTS_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_HARDENED_USERCOPY=y
@ -688,7 +684,6 @@ CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_DEV_QCOM_MSM_QCE=y
CONFIG_CRYPTO_DEV_QCRYPTO=y
CONFIG_CRYPTO_DEV_QCEDEV=y
CONFIG_CRYPTO_DEV_QCOM_ICE=y
CONFIG_XZ_DEC=y
CONFIG_PRINTK_TIME=y
CONFIG_DYNAMIC_DEBUG=y


@ -580,19 +580,6 @@ inline int bio_phys_segments(struct request_queue *q, struct bio *bio)
}
EXPORT_SYMBOL(bio_phys_segments);
inline void bio_clone_crypt_key(struct bio *dst, const struct bio *src)
{
#ifdef CONFIG_PFK
dst->bi_iter.bi_dun = src->bi_iter.bi_dun;
#ifdef CONFIG_DM_DEFAULT_KEY
dst->bi_crypt_key = src->bi_crypt_key;
dst->bi_crypt_skip = src->bi_crypt_skip;
#endif
dst->bi_dio_inode = src->bi_dio_inode;
#endif
}
EXPORT_SYMBOL(bio_clone_crypt_key);
/**
* __bio_clone_fast - clone a bio that shares the original bio's biovec
* @bio: destination bio
@ -622,7 +609,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;
bio_clone_crypt_key(bio, bio_src);
bio_clone_blkcg_association(bio, bio_src);
}
EXPORT_SYMBOL(__bio_clone_fast);


@ -1610,9 +1610,6 @@ static struct request *blk_old_get_request(struct request_queue *q,
/* q->queue_lock is unlocked at this point */
rq->__data_len = 0;
rq->__sector = (sector_t) -1;
#ifdef CONFIG_PFK
rq->__dun = 0;
#endif
rq->bio = rq->biotail = NULL;
return rq;
}
@ -1845,9 +1842,6 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
bio->bi_next = req->bio;
req->bio = bio;
#ifdef CONFIG_PFK
req->__dun = bio->bi_iter.bi_dun;
#endif
req->__sector = bio->bi_iter.bi_sector;
req->__data_len += bio->bi_iter.bi_size;
req->ioprio = ioprio_best(req->ioprio, bio_prio(bio));
@ -1997,9 +1991,6 @@ void blk_init_request_from_bio(struct request *req, struct bio *bio)
else
req->ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
req->write_hint = bio->bi_write_hint;
#ifdef CONFIG_PFK
req->__dun = bio->bi_iter.bi_dun;
#endif
blk_rq_bio_prep(req->q, req, bio);
}
EXPORT_SYMBOL_GPL(blk_init_request_from_bio);
@ -3161,13 +3152,8 @@ bool blk_update_request(struct request *req, blk_status_t error,
req->__data_len -= total_bytes;
/* update sector only for requests with clear definition of sector */
if (!blk_rq_is_passthrough(req)) {
if (!blk_rq_is_passthrough(req))
req->__sector += total_bytes >> 9;
#ifdef CONFIG_PFK
if (req->__dun)
req->__dun += total_bytes >> 12;
#endif
}
/* mixed attributes always follow the first bio */
if (req->rq_flags & RQF_MIXED_MERGE) {
@ -3531,9 +3517,6 @@ static void __blk_rq_prep_clone(struct request *dst, struct request *src)
{
dst->cpu = src->cpu;
dst->__sector = blk_rq_pos(src);
#ifdef CONFIG_PFK
dst->__dun = blk_rq_dun(src);
#endif
dst->__data_len = blk_rq_bytes(src);
if (src->rq_flags & RQF_SPECIAL_PAYLOAD) {
dst->rq_flags |= RQF_SPECIAL_PAYLOAD;


@ -9,7 +9,7 @@
#include <linux/scatterlist.h>
#include <trace/events/block.h>
#include <linux/pfk.h>
#include "blk.h"
static struct bio *blk_bio_discard_split(struct request_queue *q,
@ -515,8 +515,6 @@ int ll_back_merge_fn(struct request_queue *q, struct request *req,
if (blk_integrity_rq(req) &&
integrity_req_gap_back_merge(req, bio))
return 0;
if (blk_try_merge(req, bio) != ELEVATOR_BACK_MERGE)
return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, blk_rq_pos(req))) {
req_set_nomerge(q, req);
@ -539,8 +537,6 @@ int ll_front_merge_fn(struct request_queue *q, struct request *req,
if (blk_integrity_rq(req) &&
integrity_req_gap_front_merge(req, bio))
return 0;
if (blk_try_merge(req, bio) != ELEVATOR_FRONT_MERGE)
return 0;
if (blk_rq_sectors(req) + bio_sectors(bio) >
blk_rq_get_max_sectors(req, bio->bi_iter.bi_sector)) {
req_set_nomerge(q, req);
@ -674,11 +670,6 @@ static void blk_account_io_merge(struct request *req)
}
}
static bool crypto_not_mergeable(const struct bio *bio, const struct bio *nxt)
{
return (!pfk_allow_merge_bio(bio, nxt));
}
/*
* For non-mq, this has to be called with the request spinlock acquired.
* For mq with scheduling, the appropriate queue wide lock should be held.
@ -717,9 +708,6 @@ static struct request *attempt_merge(struct request_queue *q,
if (req->write_hint != next->write_hint)
return NULL;
if (crypto_not_mergeable(req->bio, next->bio))
return 0;
/*
* If we are allowed to merge, then append bio list
* from next to rq and release next. merge_requests_fn
@ -859,15 +847,9 @@ enum elv_merge blk_try_merge(struct request *rq, struct bio *bio)
queue_max_discard_segments(rq->q) > 1)
return ELEVATOR_DISCARD_MERGE;
else if (blk_rq_pos(rq) + blk_rq_sectors(rq) ==
bio->bi_iter.bi_sector) {
if (crypto_not_mergeable(rq->bio, bio))
return ELEVATOR_NO_MERGE;
bio->bi_iter.bi_sector)
return ELEVATOR_BACK_MERGE;
} else if (blk_rq_pos(rq) - bio_sectors(bio) ==
bio->bi_iter.bi_sector) {
if (crypto_not_mergeable(bio, rq->bio))
return ELEVATOR_NO_MERGE;
else if (blk_rq_pos(rq) - bio_sectors(bio) == bio->bi_iter.bi_sector)
return ELEVATOR_FRONT_MERGE;
}
return ELEVATOR_NO_MERGE;
}


@ -55,6 +55,24 @@ static inline void queue_lockdep_assert_held(struct request_queue *q)
lockdep_assert_held(q->queue_lock);
}
static inline void queue_flag_set_unlocked(unsigned int flag,
struct request_queue *q)
{
if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
kref_read(&q->kobj.kref))
lockdep_assert_held(q->queue_lock);
__set_bit(flag, &q->queue_flags);
}
static inline void queue_flag_clear_unlocked(unsigned int flag,
struct request_queue *q)
{
if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
kref_read(&q->kobj.kref))
lockdep_assert_held(q->queue_lock);
__clear_bit(flag, &q->queue_flags);
}
static inline int queue_flag_test_and_clear(unsigned int flag,
struct request_queue *q)
{


@ -277,7 +277,6 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
}
}
bio_clone_crypt_key(bio, bio_src);
bio_clone_blkcg_association(bio, bio_src);
return bio;


@ -422,7 +422,7 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
{
struct elevator_queue *e = q->elevator;
struct request *__rq;
enum elv_merge ret;
/*
* Levels of merges:
* nomerges: No merges at all attempted
@ -435,11 +435,9 @@ enum elv_merge elv_merge(struct request_queue *q, struct request **req,
/*
* First try one-hit cache.
*/
if (q->last_merge) {
if (!elv_bio_merge_ok(q->last_merge, bio))
return ELEVATOR_NO_MERGE;
if (q->last_merge && elv_bio_merge_ok(q->last_merge, bio)) {
enum elv_merge ret = blk_try_merge(q->last_merge, bio);
ret = blk_try_merge(q->last_merge, bio);
if (ret != ELEVATOR_NO_MERGE) {
*req = q->last_merge;
return ret;


@ -804,8 +804,4 @@ config CRYPTO_DEV_CCREE
source "drivers/crypto/hisilicon/Kconfig"
if ARCH_QCOM
source drivers/crypto/msm/Kconfig
endif
endif # CRYPTO_HW


@ -21,7 +21,6 @@ obj-$(CONFIG_CRYPTO_DEV_MXS_DCP) += mxs-dcp.o
obj-$(CONFIG_CRYPTO_DEV_MXC_SCC) += mxc-scc.o
obj-$(CONFIG_CRYPTO_DEV_NIAGARA2) += n2_crypto.o
n2_crypto-y := n2_core.o n2_asm.o
obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += msm/
obj-$(CONFIG_CRYPTO_DEV_NX) += nx/
obj-$(CONFIG_CRYPTO_DEV_OMAP) += omap-crypto.o
obj-$(CONFIG_CRYPTO_DEV_OMAP_AES) += omap-aes-driver.o


@ -1,10 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
config CRYPTO_DEV_QCOM_ICE
tristate "Inline Crypto Module"
default n
depends on BLK_DEV_DM
help
This driver supports Inline Crypto Engine for QTI chipsets, MSM8994
and later, to accelerate crypto operations for storage needs.
To compile this driver as a module, choose M here: the
module will be called ice.


@ -4,4 +4,3 @@ obj-$(CONFIG_CRYPTO_DEV_QCEDEV) += qcedev.o
obj-$(CONFIG_CRYPTO_DEV_QCEDEV) += qcedev_smmu.o
obj-$(CONFIG_CRYPTO_DEV_QCRYPTO) += qcrypto.o
obj-$(CONFIG_CRYPTO_DEV_OTA_CRYPTO) += ota_crypto.o
obj-$(CONFIG_CRYPTO_DEV_QCOM_ICE) += ice.o

File diff suppressed because it is too large


@ -1,151 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
#define _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_
/* Register bits for ICE version */
#define ICE_CORE_CURRENT_MAJOR_VERSION 0x03
#define ICE_CORE_STEP_REV_MASK 0xFFFF
#define ICE_CORE_STEP_REV 0 /* bit 15-0 */
#define ICE_CORE_MAJOR_REV_MASK 0xFF000000
#define ICE_CORE_MAJOR_REV 24 /* bit 31-24 */
#define ICE_CORE_MINOR_REV_MASK 0xFF0000
#define ICE_CORE_MINOR_REV 16 /* bit 23-16 */
#define ICE_BIST_STATUS_MASK (0xF0000000) /* bits 28-31 */
#define ICE_FUSE_SETTING_MASK 0x1
#define ICE_FORCE_HW_KEY0_SETTING_MASK 0x2
#define ICE_FORCE_HW_KEY1_SETTING_MASK 0x4
/* QCOM ICE Registers from SWI */
#define QCOM_ICE_REGS_CONTROL 0x0000
#define QCOM_ICE_REGS_RESET 0x0004
#define QCOM_ICE_REGS_VERSION 0x0008
#define QCOM_ICE_REGS_FUSE_SETTING 0x0010
#define QCOM_ICE_REGS_PARAMETERS_1 0x0014
#define QCOM_ICE_REGS_PARAMETERS_2 0x0018
#define QCOM_ICE_REGS_PARAMETERS_3 0x001C
#define QCOM_ICE_REGS_PARAMETERS_4 0x0020
#define QCOM_ICE_REGS_PARAMETERS_5 0x0024
/* QCOM ICE v3.X only */
#define QCOM_ICE_GENERAL_ERR_STTS 0x0040
#define QCOM_ICE_INVALID_CCFG_ERR_STTS 0x0030
#define QCOM_ICE_GENERAL_ERR_MASK 0x0044
/* QCOM ICE v2.X only */
#define QCOM_ICE_REGS_NON_SEC_IRQ_STTS 0x0040
#define QCOM_ICE_REGS_NON_SEC_IRQ_MASK 0x0044
#define QCOM_ICE_REGS_NON_SEC_IRQ_CLR 0x0048
#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME1 0x0050
#define QCOM_ICE_REGS_STREAM1_ERROR_SYNDROME2 0x0054
#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME1 0x0058
#define QCOM_ICE_REGS_STREAM2_ERROR_SYNDROME2 0x005C
#define QCOM_ICE_REGS_STREAM1_BIST_ERROR_VEC 0x0060
#define QCOM_ICE_REGS_STREAM2_BIST_ERROR_VEC 0x0064
#define QCOM_ICE_REGS_STREAM1_BIST_FINISH_VEC 0x0068
#define QCOM_ICE_REGS_STREAM2_BIST_FINISH_VEC 0x006C
#define QCOM_ICE_REGS_BIST_STATUS 0x0070
#define QCOM_ICE_REGS_BYPASS_STATUS 0x0074
#define QCOM_ICE_REGS_ADVANCED_CONTROL 0x1000
#define QCOM_ICE_REGS_ENDIAN_SWAP 0x1004
#define QCOM_ICE_REGS_TEST_BUS_CONTROL 0x1010
#define QCOM_ICE_REGS_TEST_BUS_REG 0x1014
#define QCOM_ICE_REGS_STREAM1_COUNTERS1 0x1100
#define QCOM_ICE_REGS_STREAM1_COUNTERS2 0x1104
#define QCOM_ICE_REGS_STREAM1_COUNTERS3 0x1108
#define QCOM_ICE_REGS_STREAM1_COUNTERS4 0x110C
#define QCOM_ICE_REGS_STREAM1_COUNTERS5_MSB 0x1110
#define QCOM_ICE_REGS_STREAM1_COUNTERS5_LSB 0x1114
#define QCOM_ICE_REGS_STREAM1_COUNTERS6_MSB 0x1118
#define QCOM_ICE_REGS_STREAM1_COUNTERS6_LSB 0x111C
#define QCOM_ICE_REGS_STREAM1_COUNTERS7_MSB 0x1120
#define QCOM_ICE_REGS_STREAM1_COUNTERS7_LSB 0x1124
#define QCOM_ICE_REGS_STREAM1_COUNTERS8_MSB 0x1128
#define QCOM_ICE_REGS_STREAM1_COUNTERS8_LSB 0x112C
#define QCOM_ICE_REGS_STREAM1_COUNTERS9_MSB 0x1130
#define QCOM_ICE_REGS_STREAM1_COUNTERS9_LSB 0x1134
#define QCOM_ICE_REGS_STREAM2_COUNTERS1 0x1200
#define QCOM_ICE_REGS_STREAM2_COUNTERS2 0x1204
#define QCOM_ICE_REGS_STREAM2_COUNTERS3 0x1208
#define QCOM_ICE_REGS_STREAM2_COUNTERS4 0x120C
#define QCOM_ICE_REGS_STREAM2_COUNTERS5_MSB 0x1210
#define QCOM_ICE_REGS_STREAM2_COUNTERS5_LSB 0x1214
#define QCOM_ICE_REGS_STREAM2_COUNTERS6_MSB 0x1218
#define QCOM_ICE_REGS_STREAM2_COUNTERS6_LSB 0x121C
#define QCOM_ICE_REGS_STREAM2_COUNTERS7_MSB 0x1220
#define QCOM_ICE_REGS_STREAM2_COUNTERS7_LSB 0x1224
#define QCOM_ICE_REGS_STREAM2_COUNTERS8_MSB 0x1228
#define QCOM_ICE_REGS_STREAM2_COUNTERS8_LSB 0x122C
#define QCOM_ICE_REGS_STREAM2_COUNTERS9_MSB 0x1230
#define QCOM_ICE_REGS_STREAM2_COUNTERS9_LSB 0x1234
#define QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE (1L << 0)
#define QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE (1L << 1)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_LBO (1L << 2)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_LBO (1L << 3)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUN (1L << 4)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUN (1L << 5)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_DUS (1L << 6)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_DUS (1L << 7)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_DBO (1L << 8)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_DBO (1L << 9)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL (1L << 10)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL (1L << 11)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX (1L << 12)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_CONF_IDX (1L << 13)
#define QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS (1L << 14)
#define QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS (1L << 15)
#define QCOM_ICE_NON_SEC_IRQ_MASK \
(QCOM_ICE_STREAM1_PREMATURE_LBA_CHANGE |\
QCOM_ICE_STREAM2_PREMATURE_LBA_CHANGE |\
QCOM_ICE_STREAM1_NOT_EXPECTED_LBO |\
QCOM_ICE_STREAM2_NOT_EXPECTED_LBO |\
QCOM_ICE_STREAM1_NOT_EXPECTED_DUN |\
QCOM_ICE_STREAM2_NOT_EXPECTED_DUN |\
QCOM_ICE_STREAM2_NOT_EXPECTED_DUS |\
QCOM_ICE_STREAM1_NOT_EXPECTED_DBO |\
QCOM_ICE_STREAM2_NOT_EXPECTED_DBO |\
QCOM_ICE_STREAM1_NOT_EXPECTED_ENC_SEL |\
QCOM_ICE_STREAM2_NOT_EXPECTED_ENC_SEL |\
QCOM_ICE_STREAM1_NOT_EXPECTED_CONF_IDX |\
QCOM_ICE_STREAM1_NOT_EXPECTED_NEW_TRNS |\
QCOM_ICE_STREAM2_NOT_EXPECTED_NEW_TRNS)
/* QCOM ICE registers from secure side */
#define QCOM_ICE_TEST_BUS_REG_SECURE_INTR (1L << 28)
#define QCOM_ICE_TEST_BUS_REG_NON_SECURE_INTR (1L << 2)
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_STTS 0x2050
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_MASK 0x2054
#define QCOM_ICE_LUT_KEYS_ICE_SEC_IRQ_CLR 0x2058
#define QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED (1L << 0)
#define QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED (1L << 1)
#define QCOM_ICE_QCOMC_DBG_OPEN_EVENT (1L << 30)
#define QCOM_ICE_KEYS_RAM_RESET_COMPLETED (1L << 31)
#define QCOM_ICE_SEC_IRQ_MASK \
(QCOM_ICE_STREAM1_PARTIALLY_SET_KEY_USED |\
QCOM_ICE_STREAM2_PARTIALLY_SET_KEY_USED |\
QCOM_ICE_QCOMC_DBG_OPEN_EVENT | \
QCOM_ICE_KEYS_RAM_RESET_COMPLETED)
#define qcom_ice_writel(ice, val, reg) \
writel_relaxed((val), (ice)->mmio + (reg))
#define qcom_ice_readl(ice, reg) \
readl_relaxed((ice)->mmio + (reg))
#endif /* _QCOM_INLINE_CRYPTO_ENGINE_REGS_H_ */
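As a hedged usage example of the accessors above (the context struct and the
function name here are illustrative, not from the full ice.c, whose diff is
suppressed in this commit view; the macros only require a 'mmio' member), the
non-secure interrupt status could be polled and cleared like this:

/* Illustrative driver context: only 'mmio' is needed by
 * qcom_ice_readl()/qcom_ice_writel(). Kernel context assumed
 * (<linux/io.h>, <linux/types.h>).
 */
struct ice_device {
	void __iomem *mmio;
};

/* Read the ICE v2.x non-secure IRQ status and clear whatever is set. */
static u32 ice_check_and_clear_nonsec_irq(struct ice_device *ice)
{
	u32 status = qcom_ice_readl(ice, QCOM_ICE_REGS_NON_SEC_IRQ_STTS);

	if (status & QCOM_ICE_NON_SEC_IRQ_MASK)
		qcom_ice_writel(ice, status, QCOM_ICE_REGS_NON_SEC_IRQ_CLR);

	return status & QCOM_ICE_NON_SEC_IRQ_MASK;
}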


@ -294,23 +294,6 @@ config DM_CRYPT
If unsure, say N.
config DM_DEFAULT_KEY
tristate "Default-key crypt target support"
depends on BLK_DEV_DM
depends on PFK
---help---
This (currently Android-specific) device-mapper target allows you to
create a device that assigns a default encryption key to bios that
don't already have one. This can sit between inline cryptographic
acceleration hardware and filesystems that use it. This ensures a
default key is used when the filesystem doesn't explicitly specify a
key, such as for filesystem metadata, leaving no sectors unencrypted.
To compile this code as a module, choose M here: the module will be
called dm-default-key.
If unsure, say N.
config DM_SNAPSHOT
tristate "Snapshot target"
depends on BLK_DEV_DM


@ -47,7 +47,6 @@ obj-$(CONFIG_DM_UNSTRIPED) += dm-unstripe.o
obj-$(CONFIG_DM_BUFIO) += dm-bufio.o
obj-$(CONFIG_DM_BIO_PRISON) += dm-bio-prison.o
obj-$(CONFIG_DM_CRYPT) += dm-crypt.o
obj-$(CONFIG_DM_DEFAULT_KEY) += dm-default-key.o
obj-$(CONFIG_DM_DELAY) += dm-delay.o
obj-$(CONFIG_DM_FLAKEY) += dm-flakey.o
obj-$(CONFIG_DM_MULTIPATH) += dm-multipath.o dm-round-robin.o


@ -125,8 +125,7 @@ struct iv_tcw_private {
* and encrypts / decrypts at the same time.
*/
enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID,
DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD,
DM_CRYPT_ENCRYPT_OVERRIDE };
DM_CRYPT_SAME_CPU, DM_CRYPT_NO_OFFLOAD };
enum cipher_flags {
CRYPT_MODE_INTEGRITY_AEAD, /* Use authenticated mode for cihper */
@ -2665,8 +2664,6 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **ar
cc->sector_shift = __ffs(cc->sector_size) - SECTOR_SHIFT;
} else if (!strcasecmp(opt_string, "iv_large_sectors"))
set_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
else if (!strcasecmp(opt_string, "allow_encrypt_override"))
set_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags);
else {
ti->error = "Invalid feature arguments";
return -EINVAL;
@ -2872,15 +2869,12 @@ static int crypt_map(struct dm_target *ti, struct bio *bio)
struct crypt_config *cc = ti->private;
/*
* If bio is REQ_PREFLUSH, REQ_NOENCRYPT, or REQ_OP_DISCARD,
* just bypass crypt queues.
* If bio is REQ_PREFLUSH or REQ_OP_DISCARD, just bypass crypt queues.
* - for REQ_PREFLUSH device-mapper core ensures that no IO is in-flight
* - for REQ_OP_DISCARD caller must use flush if IO ordering matters
*/
if (unlikely(bio->bi_opf & REQ_PREFLUSH) ||
(unlikely(bio->bi_opf & REQ_NOENCRYPT) &&
test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags)) ||
bio_op(bio) == REQ_OP_DISCARD) {
if (unlikely(bio->bi_opf & REQ_PREFLUSH ||
bio_op(bio) == REQ_OP_DISCARD)) {
bio_set_dev(bio, cc->dev->bdev);
if (bio_sectors(bio))
bio->bi_iter.bi_sector = cc->start +
@ -2967,8 +2961,6 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
num_feature_args += test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags);
num_feature_args += cc->sector_size != (1 << SECTOR_SHIFT);
num_feature_args += test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
num_feature_args += test_bit(DM_CRYPT_ENCRYPT_OVERRIDE,
&cc->flags);
if (cc->on_disk_tag_size)
num_feature_args++;
if (num_feature_args) {
@ -2985,8 +2977,6 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
DMEMIT(" sector_size:%d", cc->sector_size);
if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
DMEMIT(" iv_large_sectors");
if (test_bit(DM_CRYPT_ENCRYPT_OVERRIDE, &cc->flags))
DMEMIT(" allow_encrypt_override");
}
break;


@ -1,224 +0,0 @@
/*
* Copyright (C) 2017 Google, Inc.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
* may be copied, distributed, and modified under those terms.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/device-mapper.h>
#include <linux/module.h>
#include <linux/pfk.h>
#define DM_MSG_PREFIX "default-key"
struct default_key_c {
struct dm_dev *dev;
sector_t start;
struct blk_encryption_key key;
};
static void default_key_dtr(struct dm_target *ti)
{
struct default_key_c *dkc = ti->private;
if (dkc->dev)
dm_put_device(ti, dkc->dev);
kzfree(dkc);
}
/*
* Construct a default-key mapping: <mode> <key> <dev_path> <start>
*/
static int default_key_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
struct default_key_c *dkc;
size_t key_size;
unsigned long long tmp;
char dummy;
int err;
if (argc != 4) {
ti->error = "Invalid argument count";
return -EINVAL;
}
dkc = kzalloc(sizeof(*dkc), GFP_KERNEL);
if (!dkc) {
ti->error = "Out of memory";
return -ENOMEM;
}
ti->private = dkc;
if (strcmp(argv[0], "AES-256-XTS") != 0) {
ti->error = "Unsupported encryption mode";
err = -EINVAL;
goto bad;
}
key_size = strlen(argv[1]);
if (key_size != 2 * BLK_ENCRYPTION_KEY_SIZE_AES_256_XTS) {
ti->error = "Unsupported key size";
err = -EINVAL;
goto bad;
}
key_size /= 2;
if (hex2bin(dkc->key.raw, argv[1], key_size) != 0) {
ti->error = "Malformed key string";
err = -EINVAL;
goto bad;
}
err = dm_get_device(ti, argv[2], dm_table_get_mode(ti->table),
&dkc->dev);
if (err) {
ti->error = "Device lookup failed";
goto bad;
}
if (sscanf(argv[3], "%llu%c", &tmp, &dummy) != 1) {
ti->error = "Invalid start sector";
err = -EINVAL;
goto bad;
}
dkc->start = tmp;
if (!blk_queue_inlinecrypt(bdev_get_queue(dkc->dev->bdev))) {
ti->error = "Device does not support inline encryption";
err = -EINVAL;
goto bad;
}
/* Pass flush requests through to the underlying device. */
ti->num_flush_bios = 1;
/*
* We pass discard requests through to the underlying device, although
* the discarded blocks will be zeroed, which leaks information about
* unused blocks. It's also impossible for dm-default-key to know not
* to decrypt discarded blocks, so they will not be read back as zeroes
* and we must set discard_zeroes_data_unsupported.
*/
ti->num_discard_bios = 1;
/*
* It's unclear whether WRITE_SAME would work with inline encryption; it
* would depend on whether the hardware duplicates the data before or
* after encryption. But since the internal storage in some devices
* (MSM8998-based) doesn't claim to support WRITE_SAME anyway, we don't
* currently have a way to test it. Leave it disabled it for now.
*/
/*ti->num_write_same_bios = 1;*/
return 0;
bad:
default_key_dtr(ti);
return err;
}
static int default_key_map(struct dm_target *ti, struct bio *bio)
{
const struct default_key_c *dkc = ti->private;
bio_set_dev(bio, dkc->dev->bdev);
if (bio_sectors(bio)) {
bio->bi_iter.bi_sector = dkc->start +
dm_target_offset(ti, bio->bi_iter.bi_sector);
}
if (!bio->bi_crypt_key && !bio->bi_crypt_skip)
bio->bi_crypt_key = &dkc->key;
return DM_MAPIO_REMAPPED;
}
static void default_key_status(struct dm_target *ti, status_type_t type,
unsigned int status_flags, char *result,
unsigned int maxlen)
{
const struct default_key_c *dkc = ti->private;
unsigned int sz = 0;
switch (type) {
case STATUSTYPE_INFO:
result[0] = '\0';
break;
case STATUSTYPE_TABLE:
/* encryption mode */
DMEMIT("AES-256-XTS");
/* reserved for key; dm-crypt shows it, but we don't for now */
DMEMIT(" -");
/* name of underlying device, and the start sector in it */
DMEMIT(" %s %llu", dkc->dev->name,
(unsigned long long)dkc->start);
break;
}
}
static int default_key_prepare_ioctl(struct dm_target *ti,
struct block_device **bdev)
{
struct default_key_c *dkc = ti->private;
struct dm_dev *dev = dkc->dev;
*bdev = dev->bdev;
/*
* Only pass ioctls through if the device sizes match exactly.
*/
if (dkc->start ||
ti->len != i_size_read(dev->bdev->bd_inode) >> SECTOR_SHIFT)
return 1;
return 0;
}
static int default_key_iterate_devices(struct dm_target *ti,
iterate_devices_callout_fn fn,
void *data)
{
struct default_key_c *dkc = ti->private;
return fn(ti, dkc->dev, dkc->start, ti->len, data);
}
static struct target_type default_key_target = {
.name = "default-key",
.version = {1, 0, 0},
.module = THIS_MODULE,
.ctr = default_key_ctr,
.dtr = default_key_dtr,
.map = default_key_map,
.status = default_key_status,
.prepare_ioctl = default_key_prepare_ioctl,
.iterate_devices = default_key_iterate_devices,
};
static int __init dm_default_key_init(void)
{
return dm_register_target(&default_key_target);
}
static void __exit dm_default_key_exit(void)
{
dm_unregister_target(&default_key_target);
}
module_init(dm_default_key_init);
module_exit(dm_default_key_exit);
MODULE_AUTHOR("Paul Lawrence <paullawrence@google.com>");
MODULE_AUTHOR("Paul Crowley <paulcrowley@google.com>");
MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
MODULE_DESCRIPTION(DM_NAME " target for encrypting filesystem metadata");
MODULE_LICENSE("GPL v2");


@ -1730,16 +1730,6 @@ static int queue_supports_sg_merge(struct dm_target *ti, struct dm_dev *dev,
return q && !test_bit(QUEUE_FLAG_NO_SG_MERGE, &q->queue_flags);
}
static int queue_supports_inline_encryption(struct dm_target *ti,
struct dm_dev *dev,
sector_t start, sector_t len,
void *data)
{
struct request_queue *q = bdev_get_queue(dev->bdev);
return q && blk_queue_inlinecrypt(q);
}
static bool dm_table_all_devices_attribute(struct dm_table *t,
iterate_devices_callout_fn func)
{
@ -1971,11 +1961,6 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
else
blk_queue_flag_set(QUEUE_FLAG_NO_SG_MERGE, q);
if (dm_table_all_devices_attribute(t, queue_supports_inline_encryption))
queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
else
queue_flag_clear_unlocked(QUEUE_FLAG_INLINECRYPT, q);
dm_table_verify_integrity(t);
/*


@ -1574,7 +1574,6 @@ static int mmc_blk_cqe_issue_rw_rq(struct mmc_queue *mq, struct request *req)
int err = 0;
mmc_blk_data_prep(mq, mqrq, 0, NULL, NULL);
mqrq->brq.mrq.req = req;
mmc_deferred_scaling(mq->card->host);
mmc_cqe_clk_scaling_start_busy(mq, mq->card->host, true);
@ -2209,7 +2208,6 @@ static int mmc_blk_mq_issue_rw_rq(struct mmc_queue *mq,
mmc_blk_rw_rq_prep(mqrq, mq->card, 0, mq);
mqrq->brq.mrq.done = mmc_blk_mq_req_done;
mqrq->brq.mrq.req = req;
mmc_pre_req(host, &mqrq->brq.mrq);


@ -385,8 +385,6 @@ static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
if (host->inlinecrypt_support)
queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, mq->queue);
if (host->ops->init)
host->ops->init(host);


@ -151,17 +151,6 @@ config MMC_SDHCI_OF_AT91
help
This selects the Atmel SDMMC driver
config MMC_SDHCI_MSM_ICE
bool "Qualcomm Technologies, Inc Inline Crypto Engine for SDHCI core"
depends on MMC_SDHCI_MSM && CRYPTO_DEV_QCOM_ICE
help
This selects the QTI specific additions to support Inline Crypto
Engine (ICE). ICE accelerates the crypto operations and maintains
the high SDHCI performance.
Select this if you have ICE supported for SDHCI on QTI chipset.
If unsure, say N.
config MMC_SDHCI_OF_ESDHC
tristate "SDHCI OF support for the Freescale eSDHC controller"
depends on MMC_SDHCI_PLTFM


@ -87,7 +87,6 @@ obj-$(CONFIG_MMC_SDHCI_OF_DWCMSHC) += sdhci-of-dwcmshc.o
obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
obj-$(CONFIG_MMC_SDHCI_MSM_ICE) += sdhci-msm-ice.o
obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o
obj-$(CONFIG_MMC_SDHCI_MICROCHIP_PIC32) += sdhci-pic32.o
obj-$(CONFIG_MMC_SDHCI_BRCMSTB) += sdhci-brcmstb.o


@ -17,6 +17,7 @@
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include "../core/queue.h"
#include "cqhci.h"
#include "sdhci-msm.h"
@ -257,7 +258,6 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
{
struct mmc_host *mmc = cq_host->mmc;
u32 cqcfg;
u32 cqcap = 0;
cqcfg = cqhci_readl(cq_host, CQHCI_CFG);
@ -275,22 +275,6 @@ static void __cqhci_enable(struct cqhci_host *cq_host)
if (cq_host->caps & CQHCI_TASK_DESC_SZ_128)
cqcfg |= CQHCI_TASK_DESC_SZ;
cqcap = cqhci_readl(cq_host, CQHCI_CAP);
if (cqcap & CQHCI_CAP_CS) {
/*
* In case host controller supports cryptographic operations
* then, enable crypro support.
*/
cq_host->caps |= CQHCI_CAP_CRYPTO_SUPPORT;
cqcfg |= CQHCI_ICE_ENABLE;
/*
* For SDHC v5.0 onwards, ICE 3.0 specific registers are added
* in CQ register space, due to which few CQ registers are
* shifted. Set offset_changed boolean to use updated address.
*/
cq_host->offset_changed = true;
}
cqhci_writel(cq_host, cqcfg, CQHCI_CFG);
cqcfg |= CQHCI_ENABLE;
@ -584,16 +568,23 @@ static void cqhci_pm_qos_vote(struct sdhci_host *host, struct mmc_request *mrq)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
brq.mrq);
struct request *req = mmc_queue_req_to_req(mqrq);
sdhci_msm_pm_qos_cpu_vote(host,
msm_host->pdata->pm_qos_data.cmdq_latency, mrq->req->cpu);
msm_host->pdata->pm_qos_data.cmdq_latency, req->cpu);
}
static void cqhci_pm_qos_unvote(struct sdhci_host *host,
struct mmc_request *mrq)
{
struct mmc_queue_req *mqrq = container_of(mrq, struct mmc_queue_req,
brq.mrq);
struct request *req = mmc_queue_req_to_req(mqrq);
/* use async as we're inside an atomic context (soft-irq) */
sdhci_msm_pm_qos_cpu_unvote(host, mrq->req->cpu, true);
sdhci_msm_pm_qos_cpu_unvote(host, req->cpu, true);
}
static void cqhci_post_req(struct mmc_host *host, struct mmc_request *mrq)
@ -617,30 +608,6 @@ static inline int cqhci_tag(struct mmc_request *mrq)
return mrq->cmd ? DCMD_SLOT : mrq->tag;
}
static inline
void cqe_prep_crypto_desc(struct cqhci_host *cq_host, u64 *task_desc,
u64 ice_ctx)
{
u64 *ice_desc = NULL;
if (cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) {
/*
* Get the address of ice context for the given task descriptor.
* ice context is present in the upper 64bits of task descriptor
* ice_conext_base_address = task_desc + 8-bytes
*/
ice_desc = (__le64 __force *)((u8 *)task_desc +
CQHCI_TASK_DESC_TASK_PARAMS_SIZE);
memset(ice_desc, 0, CQHCI_TASK_DESC_ICE_PARAMS_SIZE);
/*
* Assign upper 64bits data of task descritor with ice context
*/
if (ice_ctx)
*ice_desc = cpu_to_le64(ice_ctx);
}
}
static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
int err = 0;
@ -650,7 +617,6 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
struct cqhci_host *cq_host = mmc->cqe_private;
unsigned long flags;
struct sdhci_host *host = mmc_priv(mmc);
u64 ice_ctx = 0;
if (!cq_host->enabled) {
pr_err("%s: cqhci: not enabled\n", mmc_hostname(mmc));
@ -675,25 +641,15 @@ static int cqhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
}
if (mrq->data) {
if (cq_host->ops->crypto_cfg) {
err = cq_host->ops->crypto_cfg(mmc, mrq, tag, &ice_ctx);
if (err) {
mmc->err_stats[MMC_ERR_ICE_CFG]++;
pr_err("%s: failed to configure crypto: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
goto out;
}
}
task_desc = (__le64 __force *)get_desc(cq_host, tag);
cqhci_prep_task_desc(mrq, &data, 1);
*task_desc = cpu_to_le64(data);
cqe_prep_crypto_desc(cq_host, task_desc, ice_ctx);
err = cqhci_prep_tran_desc(mrq, cq_host, tag);
if (err) {
pr_err("%s: cqhci: failed to setup tx desc: %d\n",
mmc_hostname(mmc), err);
goto end_crypto;
goto out;
}
/* PM QoS */
sdhci_msm_pm_qos_irq_vote(host);
@ -734,20 +690,6 @@ out_unlock:
if (err)
cqhci_post_req(mmc, mrq);
goto out;
end_crypto:
if (cq_host->ops->crypto_cfg_end && mrq->data) {
err = cq_host->ops->crypto_cfg_end(mmc, mrq);
if (err)
pr_err("%s: failed to end ice config: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
}
if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
cq_host->ops->crypto_cfg_reset && mrq->data)
cq_host->ops->crypto_cfg_reset(mmc, tag);
out:
return err;
}
@ -851,7 +793,7 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
struct cqhci_slot *slot = &cq_host->slot[tag];
struct mmc_request *mrq = slot->mrq;
struct mmc_data *data;
int err = 0, offset = 0;
int offset = 0;
if (cq_host->offset_changed)
offset = CQE_V5_VENDOR_CFG;
@ -873,13 +815,6 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
data = mrq->data;
if (data) {
if (cq_host->ops->crypto_cfg_end) {
err = cq_host->ops->crypto_cfg_end(mmc, mrq);
if (err) {
pr_err("%s: failed to end ice config: err %d tag %d\n",
mmc_hostname(mmc), err, tag);
}
}
if (data->error)
data->bytes_xfered = 0;
else
@ -891,9 +826,6 @@ static void cqhci_finish_mrq(struct mmc_host *mmc, unsigned int tag)
CQHCI_VENDOR_CFG + offset);
}
if (!(cq_host->caps & CQHCI_CAP_CRYPTO_SUPPORT) &&
cq_host->ops->crypto_cfg_reset)
cq_host->ops->crypto_cfg_reset(mmc, tag);
mmc_cqe_request_done(mmc, mrq);
}
@ -1287,14 +1219,6 @@ int cqhci_init(struct cqhci_host *cq_host, struct mmc_host *mmc,
mmc->cqe_qdepth -= 1;
cqcap = cqhci_readl(cq_host, CQHCI_CAP);
if (cqcap & CQHCI_CAP_CS) {
/*
* In case host controller supports cryptographic operations
* then, it uses 128bit task descriptor. Upper 64 bits of task
* descriptor would be used to pass crypto specific informaton.
*/
cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
}
cq_host->slot = devm_kcalloc(mmc_dev(mmc), cq_host->num_slots,
sizeof(*cq_host->slot), GFP_KERNEL);


@ -31,13 +31,11 @@
/* capabilities */
#define CQHCI_CAP 0x04
#define CQHCI_CAP_CS (1 << 28)
/* configuration */
#define CQHCI_CFG 0x08
#define CQHCI_DCMD 0x00001000
#define CQHCI_TASK_DESC_SZ 0x00000100
#define CQHCI_ENABLE 0x00000001
#define CQHCI_ICE_ENABLE 0x00000002
/* control */
#define CQHCI_CTL 0x0C
@ -165,9 +163,6 @@
#define CQHCI_DAT_ADDR_LO(x) (((x) & 0xFFFFFFFF) << 32)
#define CQHCI_DAT_ADDR_HI(x) (((x) & 0xFFFFFFFF) << 0)
#define CQHCI_TASK_DESC_TASK_PARAMS_SIZE 8
#define CQHCI_TASK_DESC_ICE_PARAMS_SIZE 8
struct cqhci_host_ops;
struct mmc_host;
struct cqhci_slot;
@ -190,7 +185,6 @@ struct cqhci_host {
u32 dcmd_slot;
u32 caps;
#define CQHCI_TASK_DESC_SZ_128 0x1
#define CQHCI_CAP_CRYPTO_SUPPORT 0x2
u32 quirks;
#define CQHCI_QUIRK_SHORT_TXFR_DESC_SZ 0x1
@ -235,10 +229,6 @@ struct cqhci_host_ops {
u32 (*read_l)(struct cqhci_host *host, int reg);
void (*enable)(struct mmc_host *mmc);
void (*disable)(struct mmc_host *mmc, bool recovery);
int (*crypto_cfg)(struct mmc_host *mmc, struct mmc_request *mrq,
u32 slot, u64 *ice_ctx);
int (*crypto_cfg_end)(struct mmc_host *mmc, struct mmc_request *mrq);
void (*crypto_cfg_reset)(struct mmc_host *mmc, unsigned int slot);
};
static inline void cqhci_writel(struct cqhci_host *host, u32 val, int reg)


@ -1,581 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
*/
#include "sdhci-msm-ice.h"
static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
{
struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x\n",
__func__, error);
if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
}
static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
{
struct device_node *node;
struct platform_device *ice_pdev = NULL;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_pdev = qcom_ice_get_pdevice(node);
out:
return ice_pdev;
}
static
struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
{
struct qcom_ice_variant_ops *ice_vops = NULL;
struct device_node *node;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_vops = qcom_ice_get_variant_ops(node);
of_node_put(node);
out:
return ice_vops;
}
static
void sdhci_msm_enable_ice_hci(struct sdhci_host *host, bool enable)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
u32 config = 0;
u32 ice_cap = 0;
/*
* Enable the cryptographic support inside SDHC.
* This is a global config which needs to be enabled
* all the time.
* Only when it it is enabled, the ICE_HCI capability
* will get reflected in CQCAP register.
*/
config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
if (enable)
config &= ~DISABLE_CRYPTO;
else
config |= DISABLE_CRYPTO;
writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
/*
* CQCAP register is in different register space from above
* ice global enable register. So a mb() is required to ensure
* above write gets completed before reading the CQCAP register.
*/
mb();
/*
* Check if ICE HCI capability support is present
* If present, enable it.
*/
ice_cap = readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES);
if (ice_cap & ICE_HCI_SUPPORT) {
config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
if (enable)
config |= CRYPTO_GENERAL_ENABLE;
else
config &= ~CRYPTO_GENERAL_ENABLE;
writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
}
}
int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct device *sdhc_dev;
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (!msm_host || !msm_host->pdev) {
pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
__func__, msm_host);
return -EINVAL;
}
sdhc_dev = &msm_host->pdev->dev;
msm_host->ice.vops = sdhci_msm_ice_get_vops(sdhc_dev);
msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
__func__);
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
return -EPROBE_DEFER;
}
if (!msm_host->ice.pdev) {
dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
msm_host->ice.vops = NULL;
return -ENODEV;
}
if (!msm_host->ice.vops) {
dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
msm_host->ice.pdev = NULL;
return -ENODEV;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
return 0;
}
static
int sdhci_msm_ice_pltfm_init(struct sdhci_msm_host *msm_host)
{
struct resource *ice_memres = NULL;
struct platform_device *pdev = msm_host->pdev;
int err = 0;
if (!msm_host->ice_hci_support)
goto out;
/*
* ICE HCI registers are present in cmdq register space.
* So map the cmdq mem for accessing ICE HCI registers.
*/
ice_memres = platform_get_resource_byname(pdev,
IORESOURCE_MEM, "cqhci_mem");
if (!ice_memres) {
dev_err(&pdev->dev, "Failed to get iomem resource for ice\n");
err = -EINVAL;
goto out;
}
msm_host->cryptoio = devm_ioremap(&pdev->dev,
ice_memres->start,
resource_size(ice_memres));
if (!msm_host->cryptoio) {
dev_err(&pdev->dev, "Failed to remap registers\n");
err = -ENOMEM;
}
out:
return err;
}
int sdhci_msm_ice_init(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.vops->init) {
err = sdhci_msm_ice_pltfm_init(msm_host);
if (err)
goto out;
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
err = msm_host->ice.vops->init(msm_host->ice.pdev,
msm_host,
sdhci_msm_ice_error_cb);
if (err) {
pr_err("%s: ice init err %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, false);
goto out;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
}
out:
return err;
}
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
}
static
int sdhci_msm_ice_get_cfg(struct sdhci_msm_host *msm_host, struct request *req,
unsigned int *bypass, short *key_index)
{
int err = 0;
struct ice_data_setting ice_set;
memset(&ice_set, 0, sizeof(struct ice_data_setting));
if (msm_host->ice.vops->config_start) {
err = msm_host->ice.vops->config_start(
msm_host->ice.pdev,
req, &ice_set, false);
if (err) {
pr_err("%s: ice config failed %d\n",
mmc_hostname(msm_host->mmc), err);
return err;
}
}
/* if writing data command */
if (rq_data_dir(req) == WRITE)
*bypass = ice_set.encr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
/* if reading data command */
else if (rq_data_dir(req) == READ)
*bypass = ice_set.decr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
*key_index = ice_set.crypto_data.key_index;
return err;
}
static
void sdhci_msm_ice_update_cfg(struct sdhci_host *host, u64 lba, u32 slot,
unsigned int bypass, short key_index, u32 cdu_sz)
{
unsigned int ctrl_info_val = 0;
/* Configure ICE index */
ctrl_info_val =
(key_index &
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
/* Configure data unit size of transfer request */
ctrl_info_val |=
(cdu_sz &
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
/* Configure ICE bypass mode */
ctrl_info_val |=
(bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
writel_relaxed((lba & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
writel_relaxed(ctrl_info_val,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
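/*
 * Illustrative example (not part of the original driver): using the masks
 * and offsets from sdhci-msm-ice.h, a request with key index 5, a 4 KB
 * crypto data unit (SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB == 3) and bypass
 * disabled would pack CTRL_INFO_3 as
 *
 *	ctrl_info_val = ((5 & 0x1F) << 1) |	// key index  -> 0x0A
 *			((3 & 0x07) << 6) |	// CDU size   -> 0xC0
 *			((0 & 0x01) << 0);	// bypass off -> 0x00
 *						// total      -> 0xCA
 */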
static inline
void sdhci_msm_ice_hci_update_cqe_cfg(u64 dun, unsigned int bypass,
short key_index, u64 *ice_ctx)
{
/*
* The naming convention changed between ICE2.0 and ICE3.0
* register fields. Below are the equivalent names for
* ICE3.0 vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
if (ice_ctx)
*ice_ctx = DATA_UNIT_NUM(dun) |
CRYPTO_CONFIG_INDEX(key_index) |
CRYPTO_ENABLE(!bypass);
}
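/*
 * Illustrative example (values are assumptions, not from the original
 * source): for a CQE request with DUN 0x1000, key index 2 and bypass
 * disabled, the macros from sdhci-msm-ice.h compose the 64-bit context as
 *
 *	*ice_ctx = DATA_UNIT_NUM(0x1000)  |	// bits  0..31 = 0x00001000
 *		   CRYPTO_CONFIG_INDEX(2) |	// bits 32..39 = 0x02
 *		   CRYPTO_ENABLE(1);		// bit  47     = 1
 *						// = 0x0000800200001000
 */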
static
void sdhci_msm_ice_hci_update_noncq_cfg(struct sdhci_host *host,
u64 dun, unsigned int bypass, short key_index)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
unsigned int crypto_params = 0;
/*
* The naming convention changed between ICE2.0 and ICE3.0
* register fields. Below are the equivalent names for
* ICE3.0 vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
/* Configure ICE bypass mode */
crypto_params |=
((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
/* Configure Crypto Configure Index (CCI) */
crypto_params |= (key_index &
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;
writel_relaxed((crypto_params & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
/* Update DUN */
writel_relaxed((dun & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
struct request *req;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
#ifdef CONFIG_PFK
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
#else
dun = req->__sector;
#endif
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass,
key_index);
} else {
/* For ICE versions earlier than ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
struct request *req;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
#ifdef CONFIG_PFK
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
#else
dun = req->__sector;
#endif
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_cqe_cfg(dun, bypass, key_index,
ice_ctx);
} else {
/* For ICE versions earlier than ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
struct request *req;
if (!host->is_crypto_en)
return 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
req = mrq->req;
if (req) {
if (msm_host->ice.vops->config_end) {
err = msm_host->ice.vops->config_end(
msm_host->ice.pdev, req);
if (err) {
pr_err("%s: ice config end failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
}
return 0;
}
int sdhci_msm_ice_reset(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->reset) {
err = msm_host->ice.vops->reset(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice reset failed %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
return err;
}
}
/* If ICE HCI support is present then re-enable it */
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state after reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
return 0;
}
int sdhci_msm_ice_resume(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_SUSPENDED) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->resume) {
err = msm_host->ice.vops->resume(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice resume failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
return 0;
}
int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->suspend) {
err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice suspend failed %d\n",
mmc_hostname(host->mmc), err);
return -EINVAL;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
return 0;
}
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int stat = -EINVAL;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->status) {
*ice_status = 0;
stat = msm_host->ice.vops->status(msm_host->ice.pdev);
if (stat < 0) {
pr_err("%s: ice get sts failed %d\n",
mmc_hostname(host->mmc), stat);
return -EINVAL;
}
*ice_status = stat;
}
return 0;
}
void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host->ice.vops->debug)
msm_host->ice.vops->debug(msm_host->ice.pdev);
}


@ -1,164 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015, 2017, 2019, The Linux Foundation. All rights reserved.
*/
#ifndef __SDHCI_MSM_ICE_H__
#define __SDHCI_MSM_ICE_H__
#include <linux/io.h>
#include <linux/of.h>
#include <linux/blkdev.h>
#include <crypto/ice.h>
#include "sdhci-msm.h"
#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
/* Timeout waiting for ICE initialization, which requires TZ access */
#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS 500
/*
* SDHCI host controller ICE registers. There are 32 instances
* (n = 0..31) of each of these registers.
*/
#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS 32
#define CORE_VENDOR_SPEC_ICE_CTRL 0x300
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n 0x304
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n 0x308
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n 0x30C
/* ICE3.0 registers added in the cmdq register space */
#define ICE_CQ_CAPABILITIES 0x04
#define ICE_HCI_SUPPORT (1 << 28)
#define ICE_CQ_CONFIG 0x08
#define CRYPTO_GENERAL_ENABLE (1 << 1)
#define ICE_NONCQ_CRYPTO_PARAMS 0x70
#define ICE_NONCQ_CRYPTO_DUN 0x74
/* ICE3.0 registers added in the HC register space */
#define HC_VENDOR_SPECIFIC_FUNC4 0x260
#define DISABLE_CRYPTO (1 << 15)
#define HC_VENDOR_SPECIFIC_ICE_CTRL 0x800
#define ICE_SW_RST_EN (1 << 0)
/* SDHCI MSM ICE CTRL Info register offset */
enum {
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 1,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU = 6,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE = 8,
};
/* SDHCI MSM ICE CTRL Info register masks */
enum {
MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0x1,
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x7,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CE = 0x1,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0xff
};
/* SDHCI MSM ICE encryption/decryption bypass state */
enum {
SDHCI_MSM_ICE_DISABLE_BYPASS = 0,
SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
};
/* SDHCI MSM ICE crypto data unit sizes for the target DUN of a transfer request */
enum {
SDHCI_MSM_ICE_TR_DATA_UNIT_512_B = 0,
SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB = 1,
SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB = 2,
SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB = 3,
SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB = 4,
SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB = 5,
SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB = 6,
SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB = 7,
};
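/*
 * Illustrative helper (a sketch, not part of the original header): the
 * encoding above is log2(size / 512), i.e. a value n selects a 512 << n
 * byte crypto data unit, so SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB (3) selects
 * 4096-byte units. Assuming ilog2() from <linux/log2.h> is available:
 *
 *	static inline u32 sdhci_msm_ice_cdu_from_bytes(u32 bytes)
 *	{
 *		return ilog2(bytes >> 9); // 512 B -> 0, 4 KB -> 3, 64 KB -> 7
 *	}
 */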
/* SDHCI MSM ICE internal state */
enum {
SDHCI_MSM_ICE_STATE_DISABLED = 0,
SDHCI_MSM_ICE_STATE_ACTIVE = 1,
SDHCI_MSM_ICE_STATE_SUSPENDED = 2,
};
/* crypto context fields in cmdq data command task descriptor */
#define DATA_UNIT_NUM(x) (((u64)(x) & 0xFFFFFFFF) << 0)
#define CRYPTO_CONFIG_INDEX(x) (((u64)(x) & 0xFF) << 32)
#define CRYPTO_ENABLE(x) (((u64)(x) & 0x1) << 47)
#ifdef CONFIG_MMC_SDHCI_MSM_ICE
int sdhci_msm_ice_get_dev(struct sdhci_host *host);
int sdhci_msm_ice_init(struct sdhci_host *host);
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot);
int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq);
int sdhci_msm_ice_reset(struct sdhci_host *host);
int sdhci_msm_ice_resume(struct sdhci_host *host);
int sdhci_msm_ice_suspend(struct sdhci_host *host);
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
void sdhci_msm_ice_print_regs(struct sdhci_host *host);
#else
inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host) {
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
}
return -ENODEV;
}
inline int sdhci_msm_ice_init(struct sdhci_host *host)
{
return 0;
}
inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
}
inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot)
{
return 0;
}
inline int sdhci_msm_ice_cqe_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
return 0;
}
inline int sdhci_msm_ice_cfg_end(struct sdhci_host *host,
struct mmc_request *mrq)
{
return 0;
}
inline int sdhci_msm_ice_reset(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_resume(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
return 0;
}
inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
int *ice_status)
{
return 0;
}
inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
}
#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
#endif /* __SDHCI_MSM_ICE_H__ */


@ -34,7 +34,6 @@
#include <linux/clk/qcom.h>
#include "sdhci-msm.h"
#include "sdhci-msm-ice.h"
#include "sdhci-pltfm.h"
#include "cqhci.h"
@ -2168,26 +2167,20 @@ struct sdhci_msm_pltfm_data *sdhci_msm_populate_pdata(struct device *dev,
}
}
if (msm_host->ice.pdev) {
if (sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
&ice_clk_table, &ice_clk_table_len, 0)) {
dev_err(dev, "failed parsing supported ice clock rates\n");
goto out;
}
if (!ice_clk_table || !ice_clk_table_len) {
dev_err(dev, "Invalid clock table\n");
goto out;
}
if (ice_clk_table_len != 2) {
dev_err(dev, "Need max and min frequencies in the table\n");
goto out;
}
pdata->sup_ice_clk_table = ice_clk_table;
pdata->sup_ice_clk_cnt = ice_clk_table_len;
pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
dev_dbg(dev, "supported ICE clock rates (Hz): max: %u min: %u\n",
if (!sdhci_msm_dt_get_array(dev, "qcom,ice-clk-rates",
&ice_clk_table, &ice_clk_table_len, 0)) {
if (ice_clk_table && ice_clk_table_len) {
if (ice_clk_table_len != 2) {
dev_err(dev, "Need max and min frequencies\n");
goto out;
}
pdata->sup_ice_clk_table = ice_clk_table;
pdata->sup_ice_clk_cnt = ice_clk_table_len;
pdata->ice_clk_max = pdata->sup_ice_clk_table[0];
pdata->ice_clk_min = pdata->sup_ice_clk_table[1];
dev_dbg(dev, "ICE clock rates (Hz): max: %u min: %u\n",
pdata->ice_clk_max, pdata->ice_clk_min);
}
}
if (sdhci_msm_dt_get_array(dev, "qcom,devfreq,freq-table",
@ -2409,64 +2402,6 @@ void sdhci_msm_cqe_disable(struct mmc_host *mmc, bool recovery)
sdhci_cqe_disable(mmc, recovery);
}
int sdhci_msm_cqe_crypto_cfg(struct mmc_host *mmc,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
int err = 0;
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return 0;
if (host->mmc->inlinecrypt_reset_needed &&
host->ops->crypto_engine_reset) {
err = host->ops->crypto_engine_reset(host);
if (err) {
pr_err("%s: crypto reset failed\n",
mmc_hostname(host->mmc));
goto out;
}
host->mmc->inlinecrypt_reset_needed = false;
}
err = sdhci_msm_ice_cqe_cfg(host, mrq, slot, ice_ctx);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
goto out;
}
out:
return err;
}
void sdhci_msm_cqe_crypto_cfg_reset(struct mmc_host *mmc, unsigned int slot)
{
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return;
return sdhci_msm_ice_cfg_reset(host, slot);
}
int sdhci_msm_cqe_crypto_cfg_end(struct mmc_host *mmc,
struct mmc_request *mrq)
{
int err = 0;
struct sdhci_host *host = mmc_priv(mmc);
if (!host->is_crypto_en)
return 0;
err = sdhci_msm_ice_cfg_end(host, mrq);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
return err;
}
return 0;
}
void sdhci_msm_cqe_sdhci_dumpregs(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
@ -2477,9 +2412,6 @@ void sdhci_msm_cqe_sdhci_dumpregs(struct mmc_host *mmc)
static const struct cqhci_host_ops sdhci_msm_cqhci_ops = {
.enable = sdhci_msm_cqe_enable,
.disable = sdhci_msm_cqe_disable,
.crypto_cfg = sdhci_msm_cqe_crypto_cfg,
.crypto_cfg_reset = sdhci_msm_cqe_crypto_cfg_reset,
.crypto_cfg_end = sdhci_msm_cqe_crypto_cfg_end,
.dumpregs = sdhci_msm_cqe_sdhci_dumpregs,
};
@ -4179,7 +4111,6 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
int i, index = 0;
u32 test_bus_val = 0;
u32 debug_reg[MAX_TEST_BUS] = {0};
u32 sts = 0;
sdhci_msm_cache_debug_data(host);
pr_info("----------- VENDOR REGISTER DUMP -----------\n");
@ -4260,29 +4191,10 @@ void sdhci_msm_dump_vendor_regs(struct sdhci_host *host)
pr_info(" Test bus[%d to %d]: 0x%08x 0x%08x 0x%08x 0x%08x\n",
i, i + 3, debug_reg[i], debug_reg[i+1],
debug_reg[i+2], debug_reg[i+3]);
if (host->is_crypto_en) {
sdhci_msm_ice_get_status(host, &sts);
pr_info("%s: ICE status %x\n", mmc_hostname(host->mmc), sts);
sdhci_msm_ice_print_regs(host);
}
}
static void sdhci_msm_reset(struct sdhci_host *host, u8 mask)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
/* Set ICE core to be reset in sync with SDHC core */
if (msm_host->ice.pdev) {
if (msm_host->ice_hci_support)
writel_relaxed(1, host->ioaddr +
HC_VENDOR_SPECIFIC_ICE_CTRL);
else
writel_relaxed(1,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL);
}
sdhci_reset(host, mask);
if ((host->mmc->caps2 & MMC_CAP2_CQE) && (mask & SDHCI_RESET_ALL))
cqhci_suspend(host->mmc);
@ -4974,9 +4886,6 @@ out:
}
static struct sdhci_ops sdhci_msm_ops = {
.crypto_engine_cfg = sdhci_msm_ice_cfg,
.crypto_engine_cfg_end = sdhci_msm_ice_cfg_end,
.crypto_engine_reset = sdhci_msm_ice_reset,
.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
.check_power_status = sdhci_msm_check_power_status,
.platform_execute_tuning = sdhci_msm_execute_tuning,
@ -5108,7 +5017,6 @@ static void sdhci_set_default_hw_caps(struct sdhci_msm_host *msm_host,
if ((major == 1) && (minor >= 0x6b)) {
host->cdr_support = true;
msm_host->ice_hci_support = true;
}
/* 7FF projects with 7nm DLL */
@ -5144,84 +5052,23 @@ static int sdhci_msm_setup_ice_clk(struct sdhci_msm_host *msm_host,
{
int ret = 0;
if (msm_host->ice.pdev) {
/* Setup SDC ICE clock */
msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
if (!IS_ERR(msm_host->ice_clk)) {
/* ICE core has only one clock frequency for now */
ret = clk_set_rate(msm_host->ice_clk,
msm_host->pdata->ice_clk_max);
if (ret) {
dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
ret,
msm_host->pdata->ice_clk_max);
return ret;
}
ret = clk_prepare_enable(msm_host->ice_clk);
if (ret)
return ret;
ret = clk_set_flags(msm_host->ice_clk,
CLKFLAG_RETAIN_MEM);
if (ret)
dev_err(&pdev->dev, "ICE_CLK set RETAIN_MEM failed: %d\n",
ret);
msm_host->ice_clk_rate =
msm_host->pdata->ice_clk_max;
}
}
return ret;
}
static int sdhci_msm_initialize_ice(struct sdhci_msm_host *msm_host,
struct platform_device *pdev,
struct sdhci_host *host)
{
int ret = 0;
if (msm_host->ice.pdev) {
ret = sdhci_msm_ice_init(host);
/* Setup SDC ICE clock */
msm_host->ice_clk = devm_clk_get(&pdev->dev, "ice_core_clk");
if (!IS_ERR(msm_host->ice_clk)) {
/* ICE core has only one clock frequency for now */
ret = clk_set_rate(msm_host->ice_clk,
msm_host->pdata->ice_clk_max);
if (ret) {
dev_err(&pdev->dev, "%s: SDHCi ICE init failed (%d)\n",
mmc_hostname(host->mmc), ret);
return -EINVAL;
dev_err(&pdev->dev, "ICE_CLK rate set failed (%d) for %u\n",
ret,
msm_host->pdata->ice_clk_max);
return ret;
}
host->is_crypto_en = true;
msm_host->mmc->inlinecrypt_support = true;
/* Packed commands cannot be encrypted/decrypted using ICE */
msm_host->mmc->caps2 &= ~(MMC_CAP2_PACKED_WR |
MMC_CAP2_PACKED_WR_CONTROL);
}
return 0;
}
static int sdhci_msm_get_ice_device_vops(struct sdhci_host *host,
struct platform_device *pdev)
{
int ret = 0;
ret = sdhci_msm_ice_get_dev(host);
if (ret == -EPROBE_DEFER) {
/*
* The SDHCI driver might be probed before the ICE driver.
* In that case, return -EPROBE_DEFER in order to delay
* SDHCI probing until the ICE driver has probed.
*/
dev_err(&pdev->dev, "%s: required ICE device not probed yet err = %d\n",
__func__, ret);
} else if (ret == -ENODEV) {
/*
* The ICE device is not enabled in the DTS file. No further
* initialization of the ICE driver is needed.
*/
dev_warn(&pdev->dev, "%s: ICE device is not enabled\n",
__func__);
ret = 0;
} else if (ret) {
dev_err(&pdev->dev, "%s: sdhci_msm_ice_get_dev failed %d\n",
__func__, ret);
ret = clk_prepare_enable(msm_host->ice_clk);
if (ret)
return ret;
msm_host->ice_clk_rate =
msm_host->pdata->ice_clk_max;
}
return ret;
@ -5311,11 +5158,6 @@ static int sdhci_msm_probe(struct platform_device *pdev)
msm_host->mmc = host->mmc;
msm_host->pdev = pdev;
/* get the ice device vops if present */
ret = sdhci_msm_get_ice_device_vops(host, pdev);
if (ret)
goto out_host_free;
/* Extract platform data */
if (pdev->dev.of_node) {
ret = of_alias_get_id(pdev->dev.of_node, "sdhc");
@ -5653,11 +5495,6 @@ static int sdhci_msm_probe(struct platform_device *pdev)
if (msm_host->pdata->nonhotplug)
msm_host->mmc->caps2 |= MMC_CAP2_NONHOTPLUG;
/* Initialize ICE if present */
ret = sdhci_msm_initialize_ice(msm_host, pdev, host);
if (ret == -EINVAL)
goto vreg_deinit;
init_completion(&msm_host->pwr_irq_completion);
if (gpio_is_valid(msm_host->pdata->status_gpio)) {
@ -5939,7 +5776,6 @@ static int sdhci_msm_runtime_suspend(struct device *dev)
struct sdhci_host *host = dev_get_drvdata(dev);
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int ret;
ktime_t start = ktime_get();
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
@ -5950,12 +5786,6 @@ static int sdhci_msm_runtime_suspend(struct device *dev)
defer_disable_host_irq:
disable_irq(msm_host->pwr_irq);
if (host->is_crypto_en) {
ret = sdhci_msm_ice_suspend(host);
if (ret < 0)
pr_err("%s: failed to suspend crypto engine %d\n",
mmc_hostname(host->mmc), ret);
}
sdhci_msm_disable_controller_clock(host);
trace_sdhci_msm_runtime_suspend(mmc_hostname(host->mmc), 0,
ktime_to_us(ktime_sub(ktime_get(), start)));
@ -5974,20 +5804,11 @@ static int sdhci_msm_runtime_resume(struct device *dev)
if (ret) {
pr_err("%s: Failed to enable reqd clocks\n",
mmc_hostname(host->mmc));
goto skip_ice_resume;
}
if (host->mmc->ios.timing == MMC_TIMING_MMC_HS400)
sdhci_msm_toggle_fifo_write_clk(host);
if (host->is_crypto_en) {
ret = sdhci_msm_ice_resume(host);
if (ret)
pr_err("%s: failed to resume crypto engine %d\n",
mmc_hostname(host->mmc), ret);
}
skip_ice_resume:
if (host->mmc->card && mmc_card_sdio(host->mmc->card))
goto defer_enable_host_irq;


@ -266,17 +266,9 @@ struct sdhci_msm_debug_data {
struct sdhci_host copy_host;
};
struct sdhci_msm_ice_data {
struct qcom_ice_variant_ops *vops;
struct platform_device *pdev;
int state;
};
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
void __iomem *cryptoio; /* ICE HCI mapped address */
bool ice_hci_support;
int pwr_irq; /* power irq */
struct clk *clk; /* main SD/MMC bus clock */
struct clk *pclk; /* SDHC peripheral bus clock */
@ -327,7 +319,6 @@ struct sdhci_msm_host {
int soc_min_rev;
struct workqueue_struct *pm_qos_wq;
struct sdhci_msm_dll_hsr *dll_hsr;
struct sdhci_msm_ice_data ice;
u32 ice_clk_rate;
bool debug_mode_enabled;
bool reg_store;


@ -1922,50 +1922,6 @@ static int sdhci_get_tuning_cmd(struct sdhci_host *host)
return MMC_SEND_TUNING_BLOCK;
}
static int sdhci_crypto_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
int err = 0;
if (host->mmc->inlinecrypt_reset_needed &&
host->ops->crypto_engine_reset) {
err = host->ops->crypto_engine_reset(host);
if (err) {
pr_err("%s: crypto reset failed\n",
mmc_hostname(host->mmc));
goto out;
}
host->mmc->inlinecrypt_reset_needed = false;
}
if (host->ops->crypto_engine_cfg) {
err = host->ops->crypto_engine_cfg(host, mrq, slot);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
goto out;
}
}
out:
return err;
}
static int sdhci_crypto_cfg_end(struct sdhci_host *host,
struct mmc_request *mrq)
{
int err = 0;
if (host->ops->crypto_engine_cfg_end) {
err = host->ops->crypto_engine_cfg_end(host, mrq);
if (err) {
pr_err("%s: failed to configure crypto\n",
mmc_hostname(host->mmc));
return err;
}
}
return 0;
}
static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
{
struct sdhci_host *host;
@ -2032,13 +1988,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
sdhci_get_tuning_cmd(host));
}
if (host->is_crypto_en) {
spin_unlock_irqrestore(&host->lock, flags);
if (sdhci_crypto_cfg(host, mrq, 0))
goto end_req;
spin_lock_irqsave(&host->lock, flags);
}
if (mrq->sbc && !(host->flags & SDHCI_AUTO_CMD23))
sdhci_send_command(host, mrq->sbc);
else
@ -2048,13 +1997,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
return;
end_req:
mrq->cmd->error = -EIO;
if (mrq->data)
mrq->data->error = -EIO;
host->mrq = NULL;
sdhci_dumpregs(host);
mmc_request_done(host->mmc, mrq);
}
void sdhci_set_bus_width(struct sdhci_host *host, int width)
@ -3121,7 +3063,6 @@ static bool sdhci_request_done(struct sdhci_host *host)
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
sdhci_crypto_cfg_end(host, mrq);
mmc_request_done(host->mmc, mrq);
return false;


@ -671,7 +671,6 @@ struct sdhci_host {
enum sdhci_power_policy power_policy;
bool sdio_irq_async_status;
bool is_crypto_en;
u32 auto_cmd_err_sts;
struct ratelimit_state dbg_dump_rs;
@ -712,11 +711,6 @@ struct sdhci_ops {
unsigned int (*get_ro)(struct sdhci_host *host);
void (*reset)(struct sdhci_host *host, u8 mask);
int (*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
int (*crypto_engine_cfg)(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot);
int (*crypto_engine_cfg_end)(struct sdhci_host *host,
struct mmc_request *mrq);
int (*crypto_engine_reset)(struct sdhci_host *host);
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);


@ -2261,8 +2261,6 @@ void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
if (!shost->use_clustering)
q->limits.cluster = 0;
if (shost->inlinecrypt_support)
queue_flag_set_unlocked(QUEUE_FLAG_INLINECRYPT, q);
/*
* Set a reasonable default alignment: The larger of 32-byte (dword),
* which is a common minimum for HBAs, and the minimum DMA alignment,


@ -101,18 +101,6 @@ config SCSI_UFS_QCOM
Select this if you have UFS controller on QCOM chipset.
If unsure, say N.
config SCSI_UFS_QCOM_ICE
bool "QCOM specific hooks to Inline Crypto Engine for UFS driver"
depends on SCSI_UFS_QCOM && CRYPTO_DEV_QCOM_ICE
help
This selects the QCOM specific additions to support Inline Crypto
Engine (ICE).
ICE accelerates the crypto operations and maintains the high UFS
performance.
Select this if you have ICE supported for UFS on QCOM chipset.
If unsure, say N.
config SCSI_UFS_TEST
tristate "Universal Flash Storage host controller driver unit-tests"
depends on SCSI_UFSHCD && IOSCHED_TEST


@ -1,782 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/io.h>
#include <linux/of.h>
#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <crypto/ice.h>
#include "ufshcd.h"
#include "ufs-qcom-ice.h"
#include "ufs-qcom-debugfs.h"
#define UFS_QCOM_CRYPTO_LABEL "ufs-qcom-crypto"
/* Timeout waiting for ICE initialization, which requires TZ access */
#define UFS_QCOM_ICE_COMPLETION_TIMEOUT_MS 500
#define UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN 0
static struct workqueue_struct *ice_workqueue;
static void ufs_qcom_ice_dump_regs(struct ufs_qcom_host *qcom_host, int offset,
int len, char *prefix)
{
print_hex_dump(KERN_ERR, prefix,
len > 4 ? DUMP_PREFIX_OFFSET : DUMP_PREFIX_NONE,
16, 4, qcom_host->hba->mmio_base + offset, len * 4,
false);
}
void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
{
int i;
if (!(qcom_host->dbg_print_en & UFS_QCOM_DBG_PRINT_ICE_REGS_EN))
return;
ufs_qcom_ice_dump_regs(qcom_host, REG_UFS_QCOM_ICE_CFG, 1,
"REG_UFS_QCOM_ICE_CFG ");
for (i = 0; i < NUM_QCOM_ICE_CTRL_INFO_n_REGS; i++) {
pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_1_%d = 0x%08X\n", i,
ufshcd_readl(qcom_host->hba,
(REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * i)));
pr_err("REG_UFS_QCOM_ICE_CTRL_INFO_2_%d = 0x%08X\n", i,
ufshcd_readl(qcom_host->hba,
(REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * i)));
}
if (qcom_host->ice.pdev && qcom_host->ice.vops &&
qcom_host->ice.vops->debug)
qcom_host->ice.vops->debug(qcom_host->ice.pdev);
}
static void ufs_qcom_ice_error_cb(void *host_ctrl, u32 error)
{
struct ufs_qcom_host *qcom_host = (struct ufs_qcom_host *)host_ctrl;
dev_err(qcom_host->hba->dev, "%s: Error in ice operation 0x%x\n",
__func__, error);
if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE)
qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
}
static struct platform_device *ufs_qcom_ice_get_pdevice(struct device *ufs_dev)
{
struct device_node *node;
struct platform_device *ice_pdev = NULL;
node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
if (!node) {
dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
__func__);
goto out;
}
ice_pdev = qcom_ice_get_pdevice(node);
out:
return ice_pdev;
}
static
struct qcom_ice_variant_ops *ufs_qcom_ice_get_vops(struct device *ufs_dev)
{
struct qcom_ice_variant_ops *ice_vops = NULL;
struct device_node *node;
node = of_parse_phandle(ufs_dev->of_node, UFS_QCOM_CRYPTO_LABEL, 0);
if (!node) {
dev_err(ufs_dev, "%s: ufs-qcom-crypto property not specified\n",
__func__);
goto out;
}
ice_vops = qcom_ice_get_variant_ops(node);
if (!ice_vops)
dev_err(ufs_dev, "%s: invalid ice_vops\n", __func__);
of_node_put(node);
out:
return ice_vops;
}
/**
* ufs_qcom_ice_get_dev() - sets pointers to ICE data structs in UFS QCom host
* @qcom_host: Pointer to a UFS QCom internal host structure.
*
* Sets ICE platform device pointer and ICE vops structure
* corresponding to the current UFS device.
*
* Return: -EINVAL in case of invalid input parameters:
* qcom_host, qcom_host->hba or qcom_host->hba->dev
* -ENODEV in case the ICE device is not required
* -EPROBE_DEFER in case ICE is required and hasn't been probed yet
* 0 otherwise
*/
int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
{
struct device *ufs_dev;
int err = 0;
if (!qcom_host || !qcom_host->hba || !qcom_host->hba->dev) {
pr_err("%s: invalid qcom_host %p or qcom_host->hba or qcom_host->hba->dev\n",
__func__, qcom_host);
err = -EINVAL;
goto out;
}
ufs_dev = qcom_host->hba->dev;
qcom_host->ice.vops = ufs_qcom_ice_get_vops(ufs_dev);
qcom_host->ice.pdev = ufs_qcom_ice_get_pdevice(ufs_dev);
if (qcom_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
dev_err(ufs_dev, "%s: ICE device not probed yet\n",
__func__);
qcom_host->ice.pdev = NULL;
qcom_host->ice.vops = NULL;
err = -EPROBE_DEFER;
goto out;
}
if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
dev_err(ufs_dev, "%s: invalid platform device %p or vops %p\n",
__func__, qcom_host->ice.pdev, qcom_host->ice.vops);
qcom_host->ice.pdev = NULL;
qcom_host->ice.vops = NULL;
err = -ENODEV;
goto out;
}
qcom_host->ice.state = UFS_QCOM_ICE_STATE_DISABLED;
out:
return err;
}
static void ufs_qcom_ice_cfg_work(struct work_struct *work)
{
unsigned long flags;
struct ufs_qcom_host *qcom_host =
container_of(work, struct ufs_qcom_host, ice_cfg_work);
if (!qcom_host->ice.vops->config_start)
return;
spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
if (!qcom_host->req_pending ||
ufshcd_is_shutdown_ongoing(qcom_host->hba)) {
qcom_host->work_pending = false;
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
return;
}
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
/*
* config_start is called again because the previous attempt returned
* -EAGAIN; this call now takes care of the necessary key setup.
*/
qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
qcom_host->req_pending, NULL, false);
spin_lock_irqsave(&qcom_host->ice_work_lock, flags);
qcom_host->req_pending = NULL;
qcom_host->work_pending = false;
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
}
/**
* ufs_qcom_ice_init() - initializes the ICE-UFS interface and ICE device
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
{
struct device *ufs_dev = qcom_host->hba->dev;
int err;
err = qcom_host->ice.vops->init(qcom_host->ice.pdev,
qcom_host,
ufs_qcom_ice_error_cb);
if (err) {
dev_err(ufs_dev, "%s: ice init failed. err = %d\n",
__func__, err);
goto out;
} else {
qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
}
qcom_host->dbg_print_en |= UFS_QCOM_ICE_DEFAULT_DBG_PRINT_EN;
if (!ice_workqueue) {
ice_workqueue = alloc_workqueue("ice-set-key",
WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_FREEZABLE, 0);
if (!ice_workqueue) {
dev_err(ufs_dev, "%s: workqueue allocation failed.\n",
__func__);
err = -ENOMEM;
goto out;
}
}
if (ice_workqueue) {
if (!qcom_host->is_ice_cfg_work_set) {
INIT_WORK(&qcom_host->ice_cfg_work,
ufs_qcom_ice_cfg_work);
qcom_host->is_ice_cfg_work_set = true;
}
}
out:
return err;
}
static inline bool ufs_qcom_is_data_cmd(char cmd_op, bool is_write)
{
if (is_write) {
if (cmd_op == WRITE_6 || cmd_op == WRITE_10 ||
cmd_op == WRITE_16)
return true;
} else {
if (cmd_op == READ_6 || cmd_op == READ_10 ||
cmd_op == READ_16)
return true;
}
return false;
}
int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
struct scsi_cmnd *cmd, u8 *cc_index, bool *enable)
{
struct ice_data_setting ice_set;
char cmd_op = cmd->cmnd[0];
int err;
unsigned long flags;
if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
dev_dbg(qcom_host->hba->dev, "%s: ice device is not enabled\n",
__func__);
return 0;
}
if (qcom_host->ice.vops->config_start) {
memset(&ice_set, 0, sizeof(ice_set));
spin_lock_irqsave(
&qcom_host->ice_work_lock, flags);
err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
cmd->request, &ice_set, true);
if (err) {
/*
* config_start() returns -EAGAIN when a key slot is
* available but still not configured. As configuration
* requires a non-atomic context, this means we should
* call the function again from the worker thread to do
* the configuration. For this request the error will
* propagate so it will be re-queued.
*/
if (err == -EAGAIN) {
if (!ice_workqueue) {
spin_unlock_irqrestore(
&qcom_host->ice_work_lock,
flags);
dev_err(qcom_host->hba->dev,
"%s: error %d workqueue NULL\n",
__func__, err);
return -EINVAL;
}
dev_dbg(qcom_host->hba->dev,
"%s: scheduling task for ice setup\n",
__func__);
if (!qcom_host->work_pending) {
qcom_host->req_pending = cmd->request;
if (!queue_work(ice_workqueue,
&qcom_host->ice_cfg_work)) {
qcom_host->req_pending = NULL;
spin_unlock_irqrestore(
&qcom_host->ice_work_lock,
flags);
return err;
}
qcom_host->work_pending = true;
}
} else {
if (err != -EBUSY)
dev_err(qcom_host->hba->dev,
"%s: error in ice_vops->config %d\n",
__func__, err);
}
spin_unlock_irqrestore(&qcom_host->ice_work_lock,
flags);
return err;
}
spin_unlock_irqrestore(&qcom_host->ice_work_lock, flags);
if (ufs_qcom_is_data_cmd(cmd_op, true))
*enable = !ice_set.encr_bypass;
else if (ufs_qcom_is_data_cmd(cmd_op, false))
*enable = !ice_set.decr_bypass;
if (ice_set.crypto_data.key_index >= 0)
*cc_index = (u8)ice_set.crypto_data.key_index;
}
return 0;
}
/**
* ufs_qcom_ice_cfg_start() - starts configuring UFS's ICE registers
* for an ICE transaction
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
* @cmd: Pointer to a valid scsi command. cmd->request should also be
* a valid pointer.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
struct scsi_cmnd *cmd)
{
struct device *dev = qcom_host->hba->dev;
int err = 0;
struct ice_data_setting ice_set;
unsigned int slot = 0;
sector_t lba = 0;
unsigned int ctrl_info_val = 0;
unsigned int bypass = 0;
struct request *req;
char cmd_op;
unsigned long flags;
if (!qcom_host->ice.pdev || !qcom_host->ice.vops) {
dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
goto out;
}
if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
dev_err(dev, "%s: ice state (%d) is not active\n",
__func__, qcom_host->ice.state);
return -EINVAL;
}
if (qcom_host->hw_ver.major >= 0x3) {
/*
* ICE 3.0 crypto sequences were changed,
* CTRL_INFO register no longer exists
* and doesn't need to be configured.
* The configuration is done via utrd.
*/
return 0;
}
req = cmd->request;
if (req->bio)
lba = (req->bio->bi_iter.bi_sector) >>
UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
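/*
 * bi_sector counts 512-byte sectors; shifting right by
 * UFS_QCOM_ICE_TR_DATA_UNIT_4_KB (3) converts it to 4 KB crypto data
 * units, e.g. sector 0x800 maps to data unit 0x100.
 */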
slot = req->tag;
if (slot < 0 || slot > qcom_host->hba->nutrs) {
dev_err(dev, "%s: slot (%d) is out of boundaries (0...%d)\n",
__func__, slot, qcom_host->hba->nutrs);
return -EINVAL;
}
memset(&ice_set, 0, sizeof(ice_set));
if (qcom_host->ice.vops->config_start) {
spin_lock_irqsave(
&qcom_host->ice_work_lock, flags);
err = qcom_host->ice.vops->config_start(qcom_host->ice.pdev,
req, &ice_set, true);
if (err) {
/*
* config_start() returns -EAGAIN when a key slot is
* available but still not configured. As configuration
* requires a non-atomic context, this means we should
* call the function again from the worker thread to do
* the configuration. For this request the error will
* propagate so it will be re-queued.
*/
if (err == -EAGAIN) {
if (!ice_workqueue) {
spin_unlock_irqrestore(
&qcom_host->ice_work_lock,
flags);
dev_err(qcom_host->hba->dev,
"%s: error %d workqueue NULL\n",
__func__, err);
return -EINVAL;
}
dev_dbg(qcom_host->hba->dev,
"%s: scheduling task for ice setup\n",
__func__);
if (!qcom_host->work_pending) {
qcom_host->req_pending = cmd->request;
if (!queue_work(ice_workqueue,
&qcom_host->ice_cfg_work)) {
qcom_host->req_pending = NULL;
spin_unlock_irqrestore(
&qcom_host->ice_work_lock,
flags);
return err;
}
qcom_host->work_pending = true;
}
} else {
if (err != -EBUSY)
dev_err(qcom_host->hba->dev,
"%s: error in ice_vops->config %d\n",
__func__, err);
}
spin_unlock_irqrestore(
&qcom_host->ice_work_lock, flags);
return err;
}
spin_unlock_irqrestore(
&qcom_host->ice_work_lock, flags);
}
cmd_op = cmd->cmnd[0];
#define UFS_QCOM_DIR_WRITE true
#define UFS_QCOM_DIR_READ false
/* if this is a non-data command, bypass shall be enabled */
if (!ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE) &&
!ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
bypass = UFS_QCOM_ICE_ENABLE_BYPASS;
/* if writing data command */
else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_WRITE))
bypass = ice_set.encr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
UFS_QCOM_ICE_DISABLE_BYPASS;
/* if reading data command */
else if (ufs_qcom_is_data_cmd(cmd_op, UFS_QCOM_DIR_READ))
bypass = ice_set.decr_bypass ? UFS_QCOM_ICE_ENABLE_BYPASS :
UFS_QCOM_ICE_DISABLE_BYPASS;
/* Configure ICE index */
ctrl_info_val =
(ice_set.crypto_data.key_index &
MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX)
<< OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX;
/* Configure data unit size of transfer request */
ctrl_info_val |=
UFS_QCOM_ICE_TR_DATA_UNIT_4_KB
<< OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU;
/* Configure ICE bypass mode */
ctrl_info_val |=
(bypass & MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS)
<< OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS;
if (qcom_host->hw_ver.major == 0x1) {
ufshcd_writel(qcom_host->hba, lba,
(REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 8 * slot));
ufshcd_writel(qcom_host->hba, ctrl_info_val,
(REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 8 * slot));
}
if (qcom_host->hw_ver.major == 0x2) {
ufshcd_writel(qcom_host->hba, (lba & 0xFFFFFFFF),
(REG_UFS_QCOM_ICE_CTRL_INFO_1_n + 16 * slot));
ufshcd_writel(qcom_host->hba, ((lba >> 32) & 0xFFFFFFFF),
(REG_UFS_QCOM_ICE_CTRL_INFO_2_n + 16 * slot));
ufshcd_writel(qcom_host->hba, ctrl_info_val,
(REG_UFS_QCOM_ICE_CTRL_INFO_3_n + 16 * slot));
}
/*
* Ensure the UFS-ICE registers are configured before the next
* operation, otherwise the UFS host controller might report
* errors.
*/
mb();
out:
return err;
}
/**
* ufs_qcom_ice_cfg_end() - finishes configuring UFS's ICE registers
* for an ICE transaction
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and
* qcom_host->hba->dev should all
* be valid pointers.
* @cmd: Pointer to a valid scsi command. cmd->request should also be
* a valid pointer.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host, struct request *req)
{
int err = 0;
struct device *dev = qcom_host->hba->dev;
if (qcom_host->ice.vops->config_end) {
err = qcom_host->ice.vops->config_end(qcom_host->ice.pdev, req);
if (err) {
dev_err(dev, "%s: error in ice_vops->config_end %d\n",
__func__, err);
return err;
}
}
return 0;
}
/**
* ufs_qcom_ice_reset() - resets UFS-ICE interface and ICE device
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
{
struct device *dev = qcom_host->hba->dev;
int err = 0;
if (!qcom_host->ice.pdev) {
dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
goto out;
}
if (!qcom_host->ice.vops) {
dev_err(dev, "%s: invalid ice_vops\n", __func__);
return -EINVAL;
}
if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE)
goto out;
if (qcom_host->ice.vops->reset) {
err = qcom_host->ice.vops->reset(qcom_host->ice.pdev);
if (err) {
dev_err(dev, "%s: ice_vops->reset failed. err %d\n",
__func__, err);
goto out;
}
}
if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
dev_err(qcom_host->hba->dev,
"%s: error. ice.state (%d) is not in active state\n",
__func__, qcom_host->ice.state);
err = -EINVAL;
}
out:
return err;
}
/**
* ufs_qcom_ice_resume() - resumes UFS-ICE interface and ICE device from power
* collapse
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
{
struct device *dev = qcom_host->hba->dev;
int err = 0;
if (!qcom_host->ice.pdev) {
dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
goto out;
}
if (qcom_host->ice.state !=
UFS_QCOM_ICE_STATE_SUSPENDED) {
goto out;
}
if (!qcom_host->ice.vops) {
dev_err(dev, "%s: invalid ice_vops\n", __func__);
return -EINVAL;
}
if (qcom_host->ice.vops->resume) {
err = qcom_host->ice.vops->resume(qcom_host->ice.pdev);
if (err) {
dev_err(dev, "%s: ice_vops->resume failed. err %d\n",
__func__, err);
return err;
}
}
qcom_host->ice.state = UFS_QCOM_ICE_STATE_ACTIVE;
out:
return err;
}
/**
* ufs_qcom_is_ice_busy() - reports whether there is any ongoing ICE
* operation in workqueue context.
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host should be a valid pointer.
*
* Return: 1 if ICE is busy, 0 if it is free.
* -EINVAL in case of error.
*/
int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
{
if (!qcom_host) {
pr_err("%s: invalid qcom_host\n", __func__);
return -EINVAL;
}
if (qcom_host->req_pending)
return 1;
else
return 0;
}
/**
* ufs_qcom_ice_suspend() - suspends UFS-ICE interface and ICE device
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
{
struct device *dev = qcom_host->hba->dev;
int err = 0;
if (!qcom_host->ice.pdev) {
dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
goto out;
}
if (qcom_host->ice.vops->suspend) {
err = qcom_host->ice.vops->suspend(qcom_host->ice.pdev);
if (err) {
dev_err(qcom_host->hba->dev,
"%s: ice_vops->suspend failed. err %d\n",
__func__, err);
return -EINVAL;
}
}
if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_ACTIVE) {
qcom_host->ice.state = UFS_QCOM_ICE_STATE_SUSPENDED;
} else if (qcom_host->ice.state == UFS_QCOM_ICE_STATE_DISABLED) {
dev_err(qcom_host->hba->dev,
"%s: ice state is invalid: disabled\n",
__func__);
err = -EINVAL;
}
out:
return err;
}
/**
* ufs_qcom_ice_get_status() - returns the status of an ICE transaction
* @qcom_host: Pointer to a UFS QCom internal host structure.
* qcom_host, qcom_host->hba and qcom_host->hba->dev should all
* be valid pointers.
* @ice_status: Pointer to a valid output parameter.
* < 0 in case of ICE transaction failure.
* 0 otherwise.
*
* Return: -EINVAL in case of an error
* 0 otherwise
*/
int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status)
{
struct device *dev = NULL;
int err = 0;
int stat = -EINVAL;
*ice_status = 0;
dev = qcom_host->hba->dev;
if (!dev) {
err = -EINVAL;
goto out;
}
if (!qcom_host->ice.pdev) {
dev_dbg(dev, "%s: ice device is not enabled\n", __func__);
goto out;
}
if (qcom_host->ice.state != UFS_QCOM_ICE_STATE_ACTIVE) {
err = -EINVAL;
goto out;
}
if (!qcom_host->ice.vops) {
dev_err(dev, "%s: invalid ice_vops\n", __func__);
return -EINVAL;
}
if (qcom_host->ice.vops->status) {
stat = qcom_host->ice.vops->status(qcom_host->ice.pdev);
if (stat < 0) {
dev_err(dev, "%s: ice_vops->status failed. stat %d\n",
__func__, stat);
err = -EINVAL;
goto out;
}
*ice_status = stat;
}
out:
return err;
}


@ -1,137 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2014-2019, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
*/
#ifndef _UFS_QCOM_ICE_H_
#define _UFS_QCOM_ICE_H_
#include <scsi/scsi_cmnd.h>
#include "ufs-qcom.h"
/*
* UFS host controller ICE registers. There are 32 instances
* (n = 0..31) of each of these registers.
*/
enum {
REG_UFS_QCOM_ICE_CFG = 0x2200,
REG_UFS_QCOM_ICE_CTRL_INFO_1_n = 0x2204,
REG_UFS_QCOM_ICE_CTRL_INFO_2_n = 0x2208,
REG_UFS_QCOM_ICE_CTRL_INFO_3_n = 0x220C,
};
#define NUM_QCOM_ICE_CTRL_INFO_n_REGS 32
/* UFS QCOM ICE CTRL Info register offset */
enum {
OFFSET_UFS_QCOM_ICE_CTRL_INFO_BYPASS = 0,
OFFSET_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX = 0x1,
OFFSET_UFS_QCOM_ICE_CTRL_INFO_CDU = 0x6,
};
/* UFS QCOM ICE CTRL Info register masks */
enum {
MASK_UFS_QCOM_ICE_CTRL_INFO_BYPASS = 0x1,
MASK_UFS_QCOM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
MASK_UFS_QCOM_ICE_CTRL_INFO_CDU = 0x8,
};
/* UFS QCOM ICE encryption/decryption bypass state */
enum {
UFS_QCOM_ICE_DISABLE_BYPASS = 0,
UFS_QCOM_ICE_ENABLE_BYPASS = 1,
};
/* UFS QCOM ICE crypto data unit sizes for the target DUN of a transfer request */
enum {
UFS_QCOM_ICE_TR_DATA_UNIT_512_B = 0,
UFS_QCOM_ICE_TR_DATA_UNIT_1_KB = 1,
UFS_QCOM_ICE_TR_DATA_UNIT_2_KB = 2,
UFS_QCOM_ICE_TR_DATA_UNIT_4_KB = 3,
UFS_QCOM_ICE_TR_DATA_UNIT_8_KB = 4,
UFS_QCOM_ICE_TR_DATA_UNIT_16_KB = 5,
UFS_QCOM_ICE_TR_DATA_UNIT_32_KB = 6,
};
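/*
 * As in the SDHCI variant, a value n above selects a 512 << n byte crypto
 * data unit; UFS_QCOM_ICE_TR_DATA_UNIT_4_KB (3) therefore also serves as
 * the sector-to-data-unit shift used when deriving the DUN/LBA from
 * bi_sector.
 */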
/* UFS QCOM ICE internal state */
enum {
UFS_QCOM_ICE_STATE_DISABLED = 0,
UFS_QCOM_ICE_STATE_ACTIVE = 1,
UFS_QCOM_ICE_STATE_SUSPENDED = 2,
};
#ifdef CONFIG_SCSI_UFS_QCOM_ICE
int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host);
int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host);
int ufs_qcom_ice_req_setup(struct ufs_qcom_host *qcom_host,
struct scsi_cmnd *cmd, u8 *cc_index, bool *enable);
int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
struct scsi_cmnd *cmd);
int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
struct request *req);
int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host);
int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host);
int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host);
int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host, int *ice_status);
void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host);
int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host);
#else
inline int ufs_qcom_ice_get_dev(struct ufs_qcom_host *qcom_host)
{
if (qcom_host) {
qcom_host->ice.pdev = NULL;
qcom_host->ice.vops = NULL;
}
return -ENODEV;
}
inline int ufs_qcom_ice_init(struct ufs_qcom_host *qcom_host)
{
return 0;
}
inline int ufs_qcom_ice_cfg_start(struct ufs_qcom_host *qcom_host,
struct scsi_cmnd *cmd)
{
return 0;
}
inline int ufs_qcom_ice_cfg_end(struct ufs_qcom_host *qcom_host,
struct request *req)
{
return 0;
}
inline int ufs_qcom_ice_reset(struct ufs_qcom_host *qcom_host)
{
return 0;
}
inline int ufs_qcom_ice_resume(struct ufs_qcom_host *qcom_host)
{
return 0;
}
inline int ufs_qcom_ice_suspend(struct ufs_qcom_host *qcom_host)
{
return 0;
}
inline int ufs_qcom_ice_get_status(struct ufs_qcom_host *qcom_host,
int *ice_status)
{
return 0;
}
inline void ufs_qcom_ice_print_regs(struct ufs_qcom_host *qcom_host)
{
}
static inline int ufs_qcom_is_ice_busy(struct ufs_qcom_host *qcom_host)
{
return 0;
}
#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
#endif /* _UFS_QCOM_ICE_H_ */


@ -28,7 +28,6 @@
#include "unipro.h"
#include "ufs-qcom.h"
#include "ufshci.h"
#include "ufs-qcom-ice.h"
#include "ufs-qcom-debugfs.h"
#include "ufs_quirks.h"
@ -408,15 +407,6 @@ static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba,
* is initialized.
*/
err = ufs_qcom_enable_lane_clks(host);
if (!err && host->ice.pdev) {
err = ufs_qcom_ice_init(host);
if (err) {
dev_err(hba->dev, "%s: ICE init failed (%d)\n",
__func__, err);
err = -EINVAL;
}
}
break;
case POST_CHANGE:
/* check if UFS PHY moved from DISABLED to HIBERN8 */
@ -847,11 +837,11 @@ static int ufs_qcom_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)
if (host->vddp_ref_clk && ufs_qcom_is_link_off(hba))
ret = ufs_qcom_disable_vreg(hba->dev,
host->vddp_ref_clk);
if (host->vccq_parent && !hba->auto_bkops_enabled)
ufs_qcom_config_vreg(hba->dev,
host->vccq_parent, false);
ufs_qcom_ice_suspend(host);
if (ufs_qcom_is_link_off(hba)) {
/* Assert PHY soft reset */
ufs_qcom_assert_reset(hba);
@ -891,13 +881,6 @@ static int ufs_qcom_resume(struct ufs_hba *hba, enum ufs_pm_op pm_op)
if (err)
goto out;
err = ufs_qcom_ice_resume(host);
if (err) {
dev_err(hba->dev, "%s: ufs_qcom_ice_resume failed, err = %d\n",
__func__, err);
goto out;
}
hba->is_sys_suspended = false;
out:
@ -937,104 +920,6 @@ out:
return ret;
}
#ifdef CONFIG_SCSI_UFS_QCOM_ICE
static int ufs_qcom_crypto_req_setup(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct request *req;
int ret;
if (lrbp->cmd && lrbp->cmd->request)
req = lrbp->cmd->request;
else
return 0;
/* Use request LBA or given dun as the DUN value */
if (req->bio) {
#ifdef CONFIG_PFK
if (bio_dun(req->bio)) {
/* dun @bio can be split, so we have to adjust offset */
*dun = bio_dun(req->bio);
} else {
*dun = req->bio->bi_iter.bi_sector;
*dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
}
#else
*dun = req->bio->bi_iter.bi_sector;
*dun >>= UFS_QCOM_ICE_TR_DATA_UNIT_4_KB;
#endif
}
ret = ufs_qcom_ice_req_setup(host, lrbp->cmd, cc_index, enable);
return ret;
}
static
int ufs_qcom_crytpo_engine_cfg_start(struct ufs_hba *hba, unsigned int task_tag)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct ufshcd_lrb *lrbp = &hba->lrb[task_tag];
int err = 0;
if (!host->ice.pdev ||
!lrbp->cmd ||
(lrbp->command_type != UTP_CMD_TYPE_SCSI &&
lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
goto out;
err = ufs_qcom_ice_cfg_start(host, lrbp->cmd);
out:
return err;
}
static
int ufs_qcom_crytpo_engine_cfg_end(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, struct request *req)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int err = 0;
if (!host->ice.pdev || (lrbp->command_type != UTP_CMD_TYPE_SCSI &&
lrbp->command_type != UTP_CMD_TYPE_UFS_STORAGE))
goto out;
err = ufs_qcom_ice_cfg_end(host, req);
out:
return err;
}
static
int ufs_qcom_crytpo_engine_reset(struct ufs_hba *hba)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int err = 0;
if (!host->ice.pdev)
goto out;
err = ufs_qcom_ice_reset(host);
out:
return err;
}
static int ufs_qcom_crypto_engine_get_status(struct ufs_hba *hba, u32 *status)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
if (!status)
return -EINVAL;
return ufs_qcom_ice_get_status(host, status);
}
#else /* !CONFIG_SCSI_UFS_QCOM_ICE */
#define ufs_qcom_crypto_req_setup NULL
#define ufs_qcom_crytpo_engine_cfg_start NULL
#define ufs_qcom_crytpo_engine_cfg_end NULL
#define ufs_qcom_crytpo_engine_reset NULL
#define ufs_qcom_crypto_engine_get_status NULL
#endif /* CONFIG_SCSI_UFS_QCOM_ICE */
struct ufs_qcom_dev_params {
u32 pwm_rx_gear; /* pwm rx gear to work in */
u32 pwm_tx_gear; /* pwm tx gear to work in */
@ -1642,14 +1527,7 @@ static int ufs_qcom_setup_clocks(struct ufs_hba *hba, bool on,
if (ufshcd_is_hs_mode(&hba->pwr_info))
ufs_qcom_dev_ref_clk_ctrl(host, true);
err = ufs_qcom_ice_resume(host);
if (err)
goto out;
} else if (!on && (status == PRE_CHANGE)) {
err = ufs_qcom_ice_suspend(host);
if (err)
goto out;
/*
* If auto hibern8 is enabled then the link will already
* be in hibern8 state and the ref clock can be gated.
@ -2227,36 +2105,9 @@ static int ufs_qcom_init(struct ufs_hba *hba)
/* Make a two way bind between the qcom host and the hba */
host->hba = hba;
spin_lock_init(&host->ice_work_lock);
ufshcd_set_variant(hba, host);
err = ufs_qcom_ice_get_dev(host);
if (err == -EPROBE_DEFER) {
/*
* The UFS driver might be probed before the ICE driver.
* In that case, return -EPROBE_DEFER in order to delay
* UFS probing until the ICE driver has probed.
*/
dev_err(dev, "%s: required ICE device not probed yet err = %d\n",
__func__, err);
goto out_variant_clear;
} else if (err == -ENODEV) {
/*
* The ICE device is not enabled in the DTS file. No further
* initialization of the ICE driver is needed.
*/
dev_warn(dev, "%s: ICE device is not enabled\n",
__func__);
} else if (err) {
dev_err(dev, "%s: ufs_qcom_ice_get_dev failed %d\n",
__func__, err);
goto out_variant_clear;
} else {
hba->host->inlinecrypt_support = 1;
}
host->generic_phy = devm_phy_get(dev, "ufsphy");
if (host->generic_phy == ERR_PTR(-EPROBE_DEFER)) {
@ -2832,7 +2683,6 @@ static void ufs_qcom_dump_dbg_regs(struct ufs_hba *hba, bool no_sleep)
usleep_range(1000, 1100);
ufs_qcom_phy_dbg_register_dump(phy);
usleep_range(1000, 1100);
ufs_qcom_ice_print_regs(host);
}
static u32 ufs_qcom_get_user_cap_mode(struct ufs_hba *hba)
@ -2869,14 +2719,6 @@ static struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.get_user_cap_mode = ufs_qcom_get_user_cap_mode,
};
static struct ufs_hba_crypto_variant_ops ufs_hba_crypto_variant_ops = {
.crypto_req_setup = ufs_qcom_crypto_req_setup,
.crypto_engine_cfg_start = ufs_qcom_crytpo_engine_cfg_start,
.crypto_engine_cfg_end = ufs_qcom_crytpo_engine_cfg_end,
.crypto_engine_reset = ufs_qcom_crytpo_engine_reset,
.crypto_engine_get_status = ufs_qcom_crypto_engine_get_status,
};
static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
.req_start = ufs_qcom_pm_qos_req_start,
.req_end = ufs_qcom_pm_qos_req_end,
@ -2885,7 +2727,6 @@ static struct ufs_hba_pm_qos_variant_ops ufs_hba_pm_qos_variant_ops = {
static struct ufs_hba_variant ufs_hba_qcom_variant = {
.name = "qcom",
.vops = &ufs_hba_qcom_vops,
.crypto_vops = &ufs_hba_crypto_variant_ops,
.pm_qos_vops = &ufs_hba_pm_qos_variant_ops,
};


@ -238,26 +238,6 @@ struct ufs_qcom_testbus {
u8 select_minor;
};
/**
* struct ufs_qcom_ice_data - ICE related information
* @vops: pointer to variant operations of ICE
* @async_done: completion for supporting ICE's driver asynchronous nature
* @pdev: pointer to the proper ICE platform device
* @state: UFS-ICE interface's internal state (see
* ufs-qcom-ice.h for possible internal states)
* @quirks: UFS-ICE interface related quirks
* @crypto_engine_err: crypto engine errors
*/
struct ufs_qcom_ice_data {
struct qcom_ice_variant_ops *vops;
struct platform_device *pdev;
int state;
u16 quirks;
bool crypto_engine_err;
};
#ifdef CONFIG_DEBUG_FS
struct qcom_debugfs_files {
struct dentry *debugfs_root;
@ -366,7 +346,6 @@ struct ufs_qcom_host {
bool disable_lpm;
bool is_lane_clks_enabled;
bool sec_cfg_updated;
struct ufs_qcom_ice_data ice;
void __iomem *dev_ref_clk_ctrl_mmio;
bool is_dev_ref_clk_enabled;
@ -381,9 +360,6 @@ struct ufs_qcom_host {
u32 dbg_print_en;
struct ufs_qcom_testbus testbus;
spinlock_t ice_work_lock;
struct work_struct ice_cfg_work;
bool is_ice_cfg_work_set;
struct request *req_pending;
struct ufs_vreg *vddp_ref_clk;
struct ufs_vreg *vccq_parent;

View File

@ -3366,41 +3366,6 @@ static void ufshcd_disable_intr(struct ufs_hba *hba, u32 intrs)
ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
}
static int ufshcd_prepare_crypto_utrd(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp)
{
struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
u8 cc_index = 0;
bool enable = false;
u64 dun = 0;
int ret;
/*
* Call vendor specific code to get crypto info for this request:
* enable, crypto config. index, DUN.
* If bypass is set, don't bother setting the other fields.
*/
ret = ufshcd_vops_crypto_req_setup(hba, lrbp, &cc_index, &enable, &dun);
if (ret) {
if (ret != -EAGAIN) {
dev_err(hba->dev,
"%s: failed to setup crypto request (%d)\n",
__func__, ret);
}
return ret;
}
if (!enable)
goto out;
req_desc->header.dword_0 |= cc_index | UTRD_CRYPTO_ENABLE;
req_desc->header.dword_1 = (u32)(dun & 0xFFFFFFFF);
req_desc->header.dword_3 = (u32)((dun >> 32) & 0xFFFFFFFF);
out:
return 0;
}
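Not part of the original driver, but as a worked illustration of the packing above: the 64-bit DUN is simply split across the two 32-bit descriptor words. A standalone, hedged sketch:
/* Illustrative only: how a 64-bit DUN maps onto dword_1/dword_3 above. */
#include <stdint.h>
#include <stdio.h>
int main(void)
{
	uint64_t dun = 0x0000001200000007ULL;                     /* example DUN  */
	uint32_t dword_1 = (uint32_t)(dun & 0xFFFFFFFF);          /* low 32 bits  */
	uint32_t dword_3 = (uint32_t)((dun >> 32) & 0xFFFFFFFF);  /* high 32 bits */
	printf("dword_1=0x%08x dword_3=0x%08x\n", dword_1, dword_3);
	/* prints: dword_1=0x00000007 dword_3=0x00000012 */
	return 0;
}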
/**
* ufshcd_prepare_req_desc_hdr() - Fills the requests header
* descriptor according to request
@ -3449,9 +3414,6 @@ static int ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba,
req_desc->prd_table_length = 0;
if (ufshcd_is_crypto_supported(hba))
return ufshcd_prepare_crypto_utrd(hba, lrbp);
return 0;
}
@ -3832,21 +3794,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
goto out;
}
err = ufshcd_vops_crypto_engine_cfg_start(hba, tag);
if (err) {
if (err != -EAGAIN)
dev_err(hba->dev,
"%s: failed to configure crypto engine %d\n",
__func__, err);
scsi_dma_unmap(lrbp->cmd);
lrbp->cmd = NULL;
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
goto out;
}
/* Make sure descriptors are ready before ringing the doorbell */
wmb();
@ -3863,7 +3810,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
clear_bit_unlock(tag, &hba->lrb_in_use);
ufshcd_release_all(hba);
ufshcd_vops_pm_qos_req_end(hba, cmd->request, true);
ufshcd_vops_crypto_engine_cfg_end(hba, lrbp, cmd->request);
dev_err(hba->dev, "%s: failed sending command, %d\n",
__func__, err);
err = DID_ERROR;
@ -6530,8 +6476,6 @@ static void __ufshcd_transfer_req_compl(struct ufs_hba *hba,
*/
ufshcd_vops_pm_qos_req_end(hba, cmd->request,
false);
ufshcd_vops_crypto_engine_cfg_end(hba,
lrbp, cmd->request);
}
clear_bit_unlock(index, &hba->lrb_in_use);
@ -6599,8 +6543,6 @@ void ufshcd_abort_outstanding_transfer_requests(struct ufs_hba *hba, int result)
*/
ufshcd_vops_pm_qos_req_end(hba, cmd->request,
true);
ufshcd_vops_crypto_engine_cfg_end(hba,
lrbp, cmd->request);
}
clear_bit_unlock(index, &hba->lrb_in_use);
/* Do not touch lrbp after scsi done */
@ -7668,8 +7610,6 @@ static irqreturn_t ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status)
ufsdbg_error_inject_dispatcher(hba,
ERR_INJECT_INTR, intr_status, &intr_status);
ufshcd_vops_crypto_engine_get_status(hba, &hba->ce_error);
hba->errors = UFSHCD_ERROR_MASK & intr_status;
if (hba->errors || hba->ce_error)
retval |= ufshcd_check_errors(hba);
@ -8147,15 +8087,6 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
goto out;
}
if (!err) {
err = ufshcd_vops_crypto_engine_reset(hba);
if (err) {
dev_err(hba->dev,
"%s: failed to reset crypto engine %d\n",
__func__, err);
goto out;
}
}
out:
if (err)

View File

@ -370,31 +370,6 @@ struct ufs_hba_variant_ops {
#endif
};
/**
* struct ufs_hba_crypto_variant_ops - variant specific crypto callbacks
* @crypto_req_setup: retrieve the necessary cryptographic arguments to set up
* a request's transfer descriptor.
* @crypto_engine_cfg_start: start configuring the cryptographic engine
* according to the tag parameter
* @crypto_engine_cfg_end: end configuring the cryptographic engine
* according to the tag parameter
* @crypto_engine_reset: reset the cryptographic engine
* @crypto_engine_get_status: get the error status of the cryptographic engine
*/
struct ufs_hba_crypto_variant_ops {
int (*crypto_req_setup)(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, u8 *cc_index,
bool *enable, u64 *dun);
int (*crypto_engine_cfg_start)(struct ufs_hba *hba,
unsigned int task_tag);
int (*crypto_engine_cfg_end)(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp,
struct request *req);
int (*crypto_engine_reset)(struct ufs_hba *hba);
int (*crypto_engine_get_status)(struct ufs_hba *hba, u32 *status);
};
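For orientation only, a minimal sketch (not part of this commit) of how a variant could populate this table so that every request stays in bypass mode; the no_crypto_* names are hypothetical:
/* Hypothetical bypass-only implementation of the crypto variant ops. */
static int no_crypto_req_setup(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
			       u8 *cc_index, bool *enable, u64 *dun)
{
	*cc_index = 0;
	*enable = false;	/* leave the request unencrypted */
	*dun = 0;
	return 0;
}
static struct ufs_hba_crypto_variant_ops no_crypto_variant_ops = {
	.crypto_req_setup = no_crypto_req_setup,
	/*
	 * The remaining callbacks may stay NULL; the ufshcd_vops_crypto_*
	 * wrappers further down treat a missing callback as a no-op success.
	 */
};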
/**
* struct ufs_hba_pm_qos_variant_ops - variant specific PM QoS callbacks
*/
@ -412,7 +387,6 @@ struct ufs_hba_variant {
struct device *dev;
const char *name;
struct ufs_hba_variant_ops *vops;
struct ufs_hba_crypto_variant_ops *crypto_vops;
struct ufs_hba_pm_qos_variant_ops *pm_qos_vops;
};
@ -1539,55 +1513,6 @@ static inline void ufshcd_vops_remove_debugfs(struct ufs_hba *hba)
}
#endif
static inline int ufshcd_vops_crypto_req_setup(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp, u8 *cc_index, bool *enable, u64 *dun)
{
if (hba->var && hba->var->crypto_vops &&
hba->var->crypto_vops->crypto_req_setup)
return hba->var->crypto_vops->crypto_req_setup(hba, lrbp,
cc_index, enable, dun);
return 0;
}
static inline int ufshcd_vops_crypto_engine_cfg_start(struct ufs_hba *hba,
unsigned int task_tag)
{
if (hba->var && hba->var->crypto_vops &&
hba->var->crypto_vops->crypto_engine_cfg_start)
return hba->var->crypto_vops->crypto_engine_cfg_start
(hba, task_tag);
return 0;
}
static inline int ufshcd_vops_crypto_engine_cfg_end(struct ufs_hba *hba,
struct ufshcd_lrb *lrbp,
struct request *req)
{
if (hba->var && hba->var->crypto_vops &&
hba->var->crypto_vops->crypto_engine_cfg_end)
return hba->var->crypto_vops->crypto_engine_cfg_end
(hba, lrbp, req);
return 0;
}
static inline int ufshcd_vops_crypto_engine_reset(struct ufs_hba *hba)
{
if (hba->var && hba->var->crypto_vops &&
hba->var->crypto_vops->crypto_engine_reset)
return hba->var->crypto_vops->crypto_engine_reset(hba);
return 0;
}
static inline int ufshcd_vops_crypto_engine_get_status(struct ufs_hba *hba,
u32 *status)
{
if (hba->var && hba->var->crypto_vops &&
hba->var->crypto_vops->crypto_engine_get_status)
return hba->var->crypto_vops->crypto_engine_get_status(hba,
status);
return 0;
}
static inline void ufshcd_vops_pm_qos_req_start(struct ufs_hba *hba,
struct request *req)
{

View File

@ -1,8 +1,4 @@
obj-$(CONFIG_FS_ENCRYPTION) += fscrypto.o
ccflags-y += -Ifs/ext4
ccflags-y += -Ifs/f2fs
fscrypto-y := crypto.o fname.o hooks.o keyinfo.o policy.o
fscrypto-$(CONFIG_BLOCK) += bio.o
fscrypto-$(CONFIG_PFK) += fscrypt_ice.o

View File

@ -33,17 +33,13 @@ static void __fscrypt_decrypt_bio(struct bio *bio, bool done)
bio_for_each_segment_all(bv, bio, i) {
struct page *page = bv->bv_page;
if (fscrypt_using_hardware_encryption(page->mapping->host)) {
int ret = fscrypt_decrypt_pagecache_blocks(page,
bv->bv_len,
bv->bv_offset);
if (ret)
SetPageError(page);
else if (done)
SetPageUptodate(page);
} else {
int ret = fscrypt_decrypt_pagecache_blocks(page,
bv->bv_len,
bv->bv_offset);
if (ret)
SetPageError(page);
else if (done)
SetPageUptodate(page);
}
if (done)
unlock_page(page);
}
@ -100,7 +96,7 @@ int fscrypt_zeroout_range(const struct inode *inode, pgoff_t lblk,
}
bio_set_dev(bio, inode->i_sb->s_bdev);
bio->bi_iter.bi_sector = pblk << (blockbits - 9);
bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_NOENCRYPT);
bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
ret = bio_add_page(bio, ciphertext_page, blocksize, 0);
if (WARN_ON(ret != blocksize)) {
/* should never happen! */

View File

@ -1,153 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#include "fscrypt_ice.h"
int fscrypt_using_hardware_encryption(const struct inode *inode)
{
struct fscrypt_info *ci = inode->i_crypt_info;
return S_ISREG(inode->i_mode) && ci &&
ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE;
}
EXPORT_SYMBOL(fscrypt_using_hardware_encryption);
/*
* Retrieves encryption key from the inode
*/
char *fscrypt_get_ice_encryption_key(const struct inode *inode)
{
struct fscrypt_info *ci = NULL;
if (!inode)
return NULL;
ci = inode->i_crypt_info;
if (!ci)
return NULL;
return &(ci->ci_raw_key[0]);
}
/*
* Retrieves encryption salt from the inode
*/
char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
{
struct fscrypt_info *ci = NULL;
if (!inode)
return NULL;
ci = inode->i_crypt_info;
if (!ci)
return NULL;
return &(ci->ci_raw_key[fscrypt_get_ice_encryption_key_size(inode)]);
}
/*
* returns true if the cipher mode in inode is AES XTS
*/
int fscrypt_is_aes_xts_cipher(const struct inode *inode)
{
struct fscrypt_info *ci = inode->i_crypt_info;
if (!ci)
return 0;
return (ci->ci_data_mode == FS_ENCRYPTION_MODE_PRIVATE);
}
/*
* returns true if encryption info in both inodes is equal
*/
bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
const struct inode *inode2)
{
char *key1 = NULL;
char *key2 = NULL;
char *salt1 = NULL;
char *salt2 = NULL;
if (!inode1 || !inode2)
return false;
if (inode1 == inode2)
return true;
/*
* both do not belong to ice, so we don't care, they are equal
* for us
*/
if (!fscrypt_should_be_processed_by_ice(inode1) &&
!fscrypt_should_be_processed_by_ice(inode2))
return true;
/* one belongs to ice, the other does not -> not equal */
if (fscrypt_should_be_processed_by_ice(inode1) ^
fscrypt_should_be_processed_by_ice(inode2))
return false;
key1 = fscrypt_get_ice_encryption_key(inode1);
key2 = fscrypt_get_ice_encryption_key(inode2);
salt1 = fscrypt_get_ice_encryption_salt(inode1);
salt2 = fscrypt_get_ice_encryption_salt(inode2);
/* key and salt should not be null by this point */
if (!key1 || !key2 || !salt1 || !salt2 ||
(fscrypt_get_ice_encryption_key_size(inode1) !=
fscrypt_get_ice_encryption_key_size(inode2)) ||
(fscrypt_get_ice_encryption_salt_size(inode1) !=
fscrypt_get_ice_encryption_salt_size(inode2)))
return false;
if ((memcmp(key1, key2,
fscrypt_get_ice_encryption_key_size(inode1)) == 0) &&
(memcmp(salt1, salt2,
fscrypt_get_ice_encryption_salt_size(inode1)) == 0))
return true;
return false;
}
void fscrypt_set_ice_dun(const struct inode *inode, struct bio *bio, u64 dun)
{
if (fscrypt_should_be_processed_by_ice(inode))
bio->bi_iter.bi_dun = dun;
}
EXPORT_SYMBOL(fscrypt_set_ice_dun);
void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
{
#ifdef CONFIG_DM_DEFAULT_KEY
bio->bi_crypt_skip = bi_crypt_skip;
#endif
}
EXPORT_SYMBOL(fscrypt_set_ice_skip);
/*
* This function is used by filesystems when deciding whether to merge bios.
* The basic assumption is that, when inline encryption is set, a single bio
* has to guarantee consecutive LBAs as well as a consecutive ino|pg->index.
*/
bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
int bi_crypt_skip)
{
if (!bio)
return true;
#ifdef CONFIG_DM_DEFAULT_KEY
if (bi_crypt_skip != bio->bi_crypt_skip)
return false;
#endif
/* if both of them are not encrypted, no further check is needed */
if (!bio_dun(bio) && !bio_encrypted)
return true;
/* ICE allows only consecutive iv_key stream. */
return bio_end_dun(bio) == dun;
}
EXPORT_SYMBOL(fscrypt_mergeable_bio);
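A hedged sketch of how a filesystem read path is expected to combine these helpers (the f2fs hunks further down do essentially this); alloc_read_bio() is a hypothetical placeholder and error handling is elided:
/* Illustrative caller: a page may only join a bio that continues its DUN. */
static struct bio *queue_encrypted_page(struct inode *inode, struct bio *bio,
					struct page *page)
{
	u64 dun = PG_DUN(inode, page);
	bool bio_encrypted = fscrypt_using_hardware_encryption(inode);
	/* DUN no longer contiguous: submit what we have and start over */
	if (bio && !fscrypt_mergeable_bio(bio, dun, bio_encrypted, 0)) {
		submit_bio(bio);
		bio = NULL;
	}
	if (!bio) {
		bio = alloc_read_bio(inode);	/* hypothetical helper */
		if (bio_encrypted)
			fscrypt_set_ice_dun(inode, bio, dun);
	}
	bio_add_page(bio, page, PAGE_SIZE, 0);
	return bio;
}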

View File

@ -1,99 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _FSCRYPT_ICE_H
#define _FSCRYPT_ICE_H
#include <linux/blkdev.h>
#include "fscrypt_private.h"
#if IS_ENABLED(CONFIG_FS_ENCRYPTION)
static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
{
if (!inode->i_sb->s_cop)
return false;
if (!inode->i_sb->s_cop->is_encrypted((struct inode *)inode))
return false;
return fscrypt_using_hardware_encryption(inode);
}
static inline int fscrypt_is_ice_capable(const struct super_block *sb)
{
return blk_queue_inlinecrypt(bdev_get_queue(sb->s_bdev));
}
int fscrypt_is_aes_xts_cipher(const struct inode *inode);
char *fscrypt_get_ice_encryption_key(const struct inode *inode);
char *fscrypt_get_ice_encryption_salt(const struct inode *inode);
bool fscrypt_is_ice_encryption_info_equal(const struct inode *inode1,
const struct inode *inode2);
static inline size_t fscrypt_get_ice_encryption_key_size(
const struct inode *inode)
{
return FS_AES_256_XTS_KEY_SIZE / 2;
}
static inline size_t fscrypt_get_ice_encryption_salt_size(
const struct inode *inode)
{
return FS_AES_256_XTS_KEY_SIZE / 2;
}
#else
static inline bool fscrypt_should_be_processed_by_ice(const struct inode *inode)
{
return false;
}
static inline int fscrypt_is_ice_capable(const struct super_block *sb)
{
return false;
}
static inline char *fscrypt_get_ice_encryption_key(const struct inode *inode)
{
return NULL;
}
static inline char *fscrypt_get_ice_encryption_salt(const struct inode *inode)
{
return NULL;
}
static inline size_t fscrypt_get_ice_encryption_key_size(
const struct inode *inode)
{
return 0;
}
static inline size_t fscrypt_get_ice_encryption_salt_size(
const struct inode *inode)
{
return 0;
}
static inline int fscrypt_is_xts_cipher(const struct inode *inode)
{
return 0;
}
static inline bool fscrypt_is_ice_encryption_info_equal(
const struct inode *inode1,
const struct inode *inode2)
{
return false;
}
static inline int fscrypt_is_aes_xts_cipher(const struct inode *inode)
{
return 0;
}
#endif
#endif /* _FSCRYPT_ICE_H */

View File

@ -14,18 +14,8 @@
#include <linux/fscrypt.h>
#include <crypto/hash.h>
#include <linux/pfk.h>
/* Encryption parameters */
#define FS_AES_128_ECB_KEY_SIZE 16
#define FS_AES_128_CBC_KEY_SIZE 16
#define FS_AES_128_CTS_KEY_SIZE 16
#define FS_AES_256_GCM_KEY_SIZE 32
#define FS_AES_256_CBC_KEY_SIZE 32
#define FS_AES_256_CTS_KEY_SIZE 32
#define FS_AES_256_XTS_KEY_SIZE 64
#define FS_KEY_DERIVATION_NONCE_SIZE 16
/**
@ -91,13 +81,11 @@ struct fscrypt_info {
struct fscrypt_master_key *ci_master_key;
/* fields from the fscrypt_context */
u8 ci_data_mode;
u8 ci_filename_mode;
u8 ci_flags;
u8 ci_master_key_descriptor[FS_KEY_DESCRIPTOR_SIZE];
u8 ci_nonce[FS_KEY_DERIVATION_NONCE_SIZE];
u8 ci_raw_key[FS_MAX_KEY_SIZE];
};
typedef enum {
@ -122,10 +110,6 @@ static inline bool fscrypt_valid_enc_modes(u32 contents_mode,
filenames_mode == FS_ENCRYPTION_MODE_ADIANTUM)
return true;
if (contents_mode == FS_ENCRYPTION_MODE_PRIVATE &&
filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS)
return true;
return false;
}
@ -180,7 +164,6 @@ struct fscrypt_mode {
int ivsize;
bool logged_impl_name;
bool needs_essiv;
bool inline_encryption;
};
extern void __exit fscrypt_essiv_cleanup(void);

View File

@ -17,7 +17,6 @@
#include <crypto/sha.h>
#include <crypto/skcipher.h>
#include "fscrypt_private.h"
#include "fscrypt_ice.h"
static struct crypto_shash *essiv_hash_tfm;
@ -161,20 +160,11 @@ static struct fscrypt_mode available_modes[] = {
.keysize = 32,
.ivsize = 32,
},
[FS_ENCRYPTION_MODE_PRIVATE] = {
.friendly_name = "ice",
.cipher_str = "xts(aes)",
.keysize = 64,
.ivsize = 16,
.inline_encryption = true,
},
};
static struct fscrypt_mode *
select_encryption_mode(const struct fscrypt_info *ci, const struct inode *inode)
{
struct fscrypt_mode *mode = NULL;
if (!fscrypt_valid_enc_modes(ci->ci_data_mode, ci->ci_filename_mode)) {
fscrypt_warn(inode->i_sb,
"inode %lu uses unsupported encryption modes (contents mode %d, filenames mode %d)",
@ -183,19 +173,8 @@ select_encryption_mode(const struct fscrypt_info *ci, const struct inode *inode)
return ERR_PTR(-EINVAL);
}
if (S_ISREG(inode->i_mode)) {
mode = &available_modes[ci->ci_data_mode];
if (IS_ERR(mode)) {
fscrypt_warn(inode->i_sb, "Invalid mode");
return ERR_PTR(-EINVAL);
}
if (mode->inline_encryption &&
!fscrypt_is_ice_capable(inode->i_sb)) {
fscrypt_warn(inode->i_sb, "ICE support not available");
return ERR_PTR(-EINVAL);
}
return mode;
}
if (S_ISREG(inode->i_mode))
return &available_modes[ci->ci_data_mode];
if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
return &available_modes[ci->ci_filename_mode];
@ -240,9 +219,6 @@ static int find_and_derive_key(const struct inode *inode,
memcpy(derived_key, payload->raw, mode->keysize);
err = 0;
}
} else if (mode->inline_encryption) {
memcpy(derived_key, payload->raw, mode->keysize);
err = 0;
} else {
err = derive_key_aes(payload->raw, ctx, derived_key,
mode->keysize);
@ -518,21 +494,12 @@ static void put_crypt_info(struct fscrypt_info *ci)
if (ci->ci_master_key) {
put_master_key(ci->ci_master_key);
} else {
if (ci->ci_ctfm)
crypto_free_skcipher(ci->ci_ctfm);
if (ci->ci_essiv_tfm)
crypto_free_cipher(ci->ci_essiv_tfm);
crypto_free_skcipher(ci->ci_ctfm);
crypto_free_cipher(ci->ci_essiv_tfm);
}
memset(ci->ci_raw_key, 0, sizeof(ci->ci_raw_key));
kmem_cache_free(fscrypt_info_cachep, ci);
}
static int fscrypt_data_encryption_mode(struct inode *inode)
{
return fscrypt_should_be_processed_by_ice(inode) ?
FS_ENCRYPTION_MODE_PRIVATE : FS_ENCRYPTION_MODE_AES_256_XTS;
}
int fscrypt_get_encryption_info(struct inode *inode)
{
struct fscrypt_info *crypt_info;
@ -556,8 +523,7 @@ int fscrypt_get_encryption_info(struct inode *inode)
/* Fake up a context for an unencrypted directory */
memset(&ctx, 0, sizeof(ctx));
ctx.format = FS_ENCRYPTION_CONTEXT_FORMAT_V1;
ctx.contents_encryption_mode =
fscrypt_data_encryption_mode(inode);
ctx.contents_encryption_mode = FS_ENCRYPTION_MODE_AES_256_XTS;
ctx.filenames_encryption_mode = FS_ENCRYPTION_MODE_AES_256_CTS;
memset(ctx.master_key_descriptor, 0x42, FS_KEY_DESCRIPTOR_SIZE);
} else if (res != sizeof(ctx)) {
@ -602,13 +568,9 @@ int fscrypt_get_encryption_info(struct inode *inode)
if (res)
goto out;
if (!mode->inline_encryption) {
res = setup_crypto_transform(crypt_info, mode, raw_key, inode);
if (res)
goto out;
} else {
memcpy(crypt_info->ci_raw_key, raw_key, mode->keysize);
}
res = setup_crypto_transform(crypt_info, mode, raw_key, inode);
if (res)
goto out;
if (cmpxchg_release(&inode->i_crypt_info, NULL, crypt_info) == NULL)
crypt_info = NULL;

View File

@ -37,7 +37,6 @@
#include <linux/uio.h>
#include <linux/atomic.h>
#include <linux/prefetch.h>
#include <linux/fscrypt.h>
/*
* How many user pages to map in one call to get_user_pages(). This determines
@ -452,23 +451,6 @@ dio_bio_alloc(struct dio *dio, struct dio_submit *sdio,
sdio->logical_offset_in_bio = sdio->cur_page_fs_offset;
}
#ifdef CONFIG_PFK
static bool is_inode_filesystem_type(const struct inode *inode,
const char *fs_type)
{
if (!inode || !fs_type)
return false;
if (!inode->i_sb)
return false;
if (!inode->i_sb->s_type)
return false;
return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
}
#endif
/*
* In the AIO read case we speculatively dirty the pages before starting IO.
* During IO completion, any of these pages which happen to have been written
@ -491,17 +473,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
bio_set_pages_dirty(bio);
dio->bio_disk = bio->bi_disk;
#ifdef CONFIG_PFK
bio->bi_dio_inode = dio->inode;
/* iv sector for security/pfe/pfk_fscrypt.c and f2fs in fs/f2fs/f2fs.h */
#define PG_DUN_NEW(i, p) \
(((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p) & 0xffffffff))
if (is_inode_filesystem_type(dio->inode, "f2fs"))
fscrypt_set_ice_dun(dio->inode, bio, PG_DUN_NEW(dio->inode,
(sdio->logical_offset_in_bio >> PAGE_SHIFT)));
#endif
if (sdio->submit_io) {
sdio->submit_io(bio, dio->inode, sdio->logical_offset_in_bio);
dio->bio_cookie = BLK_QC_T_NONE;
@ -513,18 +485,6 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
sdio->logical_offset_in_bio = 0;
}
struct inode *dio_bio_get_inode(struct bio *bio)
{
struct inode *inode = NULL;
if (bio == NULL)
return NULL;
#ifdef CONFIG_PFK
inode = bio->bi_dio_inode;
#endif
return inode;
}
/*
* Release any resources in case of a failure
*/

View File

@ -106,16 +106,10 @@ config EXT4_ENCRYPTION
files
config EXT4_FS_ENCRYPTION
bool "Ext4 FS Encryption"
default n
bool
default y
depends on EXT4_ENCRYPTION
config EXT4_FS_ICE_ENCRYPTION
bool "Ext4 Encryption with ICE support"
default n
depends on EXT4_FS_ENCRYPTION
depends on PFK
config EXT4_DEBUG
bool "EXT4 debugging support"
depends on EXT4_FS

View File

@ -224,10 +224,7 @@ typedef struct ext4_io_end {
ssize_t size; /* size of the extent */
} ext4_io_end_t;
#define EXT4_IO_ENCRYPTED 1
struct ext4_io_submit {
unsigned int io_flags;
struct writeback_control *io_wbc;
struct bio *io_bio;
ext4_io_end_t *io_end;

View File

@ -1235,12 +1235,10 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
!buffer_unwritten(bh) &&
(block_start < from || block_end > to)) {
decrypt = IS_ENCRYPTED(inode) &&
S_ISREG(inode->i_mode) &&
!fscrypt_using_hardware_encryption(inode);
ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0),
1, &bh);
ll_rw_block(REQ_OP_READ, 0, 1, &bh);
*wait_bh++ = bh;
decrypt = IS_ENCRYPTED(inode) &&
S_ISREG(inode->i_mode);
}
}
/*
@ -3806,14 +3804,10 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
get_block_func = ext4_dio_get_block_unwritten_async;
dio_flags = DIO_LOCKING;
}
#if defined(CONFIG_FS_ENCRYPTION)
WARN_ON(IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
&& !fscrypt_using_hardware_encryption(inode));
#endif
ret = __blockdev_direct_IO(iocb, inode,
inode->i_sb->s_bdev, iter,
get_block_func,
ext4_end_io_dio, NULL, dio_flags);
ret = __blockdev_direct_IO(iocb, inode, inode->i_sb->s_bdev, iter,
get_block_func, ext4_end_io_dio, NULL,
dio_flags);
if (ret > 0 && !overwrite && ext4_test_inode_state(inode,
EXT4_STATE_DIO_UNWRITTEN)) {
@ -3926,9 +3920,8 @@ static ssize_t ext4_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
ssize_t ret;
int rw = iov_iter_rw(iter);
#ifdef CONFIG_FS_ENCRYPTION
if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode)
&& !fscrypt_using_hardware_encryption(inode))
#ifdef CONFIG_EXT4_FS_ENCRYPTION
if (IS_ENCRYPTED(inode) && S_ISREG(inode->i_mode))
return 0;
#endif
if (fsverity_active(inode))
@ -4097,7 +4090,6 @@ static int __ext4_block_zero_page_range(handle_t *handle,
struct inode *inode = mapping->host;
struct buffer_head *bh;
struct page *page;
bool decrypt;
int err = 0;
page = find_or_create_page(mapping, from >> PAGE_SHIFT,
@ -4140,14 +4132,13 @@ static int __ext4_block_zero_page_range(handle_t *handle,
if (!buffer_uptodate(bh)) {
err = -EIO;
decrypt = S_ISREG(inode->i_mode) && IS_ENCRYPTED(inode) &&
!fscrypt_using_hardware_encryption(inode);
ll_rw_block(REQ_OP_READ, (decrypt ? REQ_NOENCRYPT : 0), 1, &bh);
ll_rw_block(REQ_OP_READ, 0, 1, &bh);
wait_on_buffer(bh);
/* Uhhuh. Read error. Complain and punt. */
if (!buffer_uptodate(bh))
goto unlock;
if (decrypt) {
if (S_ISREG(inode->i_mode) &&
IS_ENCRYPTED(inode)) {
/* We expect the key to be set. */
BUG_ON(!fscrypt_has_encryption_key(inode));
BUG_ON(blocksize != PAGE_SIZE);

View File

@ -344,8 +344,6 @@ void ext4_io_submit(struct ext4_io_submit *io)
int io_op_flags = io->io_wbc->sync_mode == WB_SYNC_ALL ?
REQ_SYNC : 0;
io->io_bio->bi_write_hint = io->io_end->inode->i_write_hint;
if (io->io_flags & EXT4_IO_ENCRYPTED)
io_op_flags |= REQ_NOENCRYPT;
bio_set_op_attrs(io->io_bio, REQ_OP_WRITE, io_op_flags);
submit_bio(io->io_bio);
}
@ -355,7 +353,6 @@ void ext4_io_submit(struct ext4_io_submit *io)
void ext4_io_submit_init(struct ext4_io_submit *io,
struct writeback_control *wbc)
{
io->io_flags = 0;
io->io_wbc = wbc;
io->io_bio = NULL;
io->io_end = NULL;
@ -476,8 +473,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
gfp_t gfp_flags = GFP_NOFS;
retry_encrypt:
if (!fscrypt_using_hardware_encryption(inode))
bounce_page = fscrypt_encrypt_pagecache_blocks(page,
bounce_page = fscrypt_encrypt_pagecache_blocks(page,
PAGE_SIZE,0, gfp_flags);
if (IS_ERR(bounce_page)) {
ret = PTR_ERR(bounce_page);
@ -498,8 +494,6 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
do {
if (!buffer_async_write(bh))
continue;
if (bounce_page)
io->io_flags |= EXT4_IO_ENCRYPTED;
ret = io_submit_add_bh(io, inode, bounce_page ?: page, bh);
if (ret) {
/*

View File

@ -397,7 +397,6 @@ int ext4_mpage_readpages(struct address_space *mapping,
}
if (bio == NULL) {
struct bio_post_read_ctx *ctx;
unsigned int flags = 0;
bio = bio_alloc(GFP_KERNEL,
min_t(int, nr_pages, BIO_MAX_PAGES));
@ -413,10 +412,8 @@ int ext4_mpage_readpages(struct address_space *mapping,
bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
bio->bi_end_io = mpage_end_io;
bio->bi_private = ctx;
if (is_readahead)
flags = flags | REQ_RAHEAD;
flags = flags | (ctx ? REQ_NOENCRYPT : 0);
bio_set_op_attrs(bio, REQ_OP_READ, flags);
bio_set_op_attrs(bio, REQ_OP_READ,
is_readahead ? REQ_RAHEAD : 0);
}
length = first_hole << blkbits;

View File

@ -71,7 +71,6 @@ static void ext4_mark_recovery_complete(struct super_block *sb,
static void ext4_clear_journal_err(struct super_block *sb,
struct ext4_super_block *es);
static int ext4_sync_fs(struct super_block *sb, int wait);
static void ext4_umount_end(struct super_block *sb, int flags);
static int ext4_remount(struct super_block *sb, int *flags, char *data);
static int ext4_statfs(struct dentry *dentry, struct kstatfs *buf);
static int ext4_unfreeze(struct super_block *sb);
@ -1347,11 +1346,6 @@ static bool ext4_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(EXT4_SB(inode->i_sb));
}
static inline bool ext4_is_encrypted(struct inode *inode)
{
return IS_ENCRYPTED(inode);
}
static const struct fscrypt_operations ext4_cryptops = {
.key_prefix = "ext4:",
.get_context = ext4_get_context,
@ -1359,7 +1353,6 @@ static const struct fscrypt_operations ext4_cryptops = {
.dummy_context = ext4_dummy_context,
.empty_dir = ext4_empty_dir,
.max_namelen = EXT4_NAME_LEN,
.is_encrypted = ext4_is_encrypted,
};
#endif
@ -1427,7 +1420,6 @@ static const struct super_operations ext4_sops = {
.freeze_fs = ext4_freeze,
.unfreeze_fs = ext4_unfreeze,
.statfs = ext4_statfs,
.umount_end = ext4_umount_end,
.remount_fs = ext4_remount,
.show_options = ext4_show_options,
#ifdef CONFIG_QUOTA
@ -5266,25 +5258,6 @@ struct ext4_mount_options {
#endif
};
static void ext4_umount_end(struct super_block *sb, int flags)
{
/*
* This is called at the end of umount(2). If there is an unclosed
* namespace, ext4 won't do put_super(); skipping it would trigger fsck
* on the next boot.
*/
if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
ext4_msg(sb, KERN_ERR,
"errors=remount-ro for active namespaces on umount %x",
flags);
clear_opt(sb, ERRORS_PANIC);
set_opt(sb, ERRORS_RO);
/* to write the latest s_kbytes_written */
if (!(sb->s_flags & MS_RDONLY))
ext4_commit_super(sb, 1);
}
}
static int ext4_remount(struct super_block *sb, int *flags, char *data)
{
struct ext4_super_block *es;

View File

@ -514,7 +514,6 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
struct bio *bio;
struct page *page = fio->encrypted_page ?
fio->encrypted_page : fio->page;
struct inode *inode = fio->page->mapping->host;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
fio->is_por ? META_POR : (__is_meta_io(fio) ?
@ -527,15 +526,14 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
/* Allocate a new bio */
bio = __bio_alloc(fio, 1);
if (f2fs_may_encrypt_bio(inode, fio))
fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, fio->page));
fscrypt_set_ice_skip(bio, fio->encrypted_page ? 1 : 0);
if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) {
bio_put(bio);
return -EFAULT;
}
fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
if (fio->io_wbc && !is_read_io(fio->op))
wbc_account_io(fio->io_wbc, page, PAGE_SIZE);
bio_set_op_attrs(bio, fio->op, fio->op_flags);
inc_page_count(fio->sbi, is_read_io(fio->op) ?
@ -710,10 +708,6 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
struct bio *bio = *fio->bio;
struct page *page = fio->encrypted_page ?
fio->encrypted_page : fio->page;
struct inode *inode;
bool bio_encrypted;
int bi_crypt_skip;
u64 dun;
if (!f2fs_is_valid_blkaddr(fio->sbi, fio->new_blkaddr,
__is_meta_io(fio) ? META_GENERIC : DATA_GENERIC))
@ -722,29 +716,15 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
trace_f2fs_submit_page_bio(page, fio);
f2fs_trace_ios(fio, 0);
inode = fio->page->mapping->host;
dun = PG_DUN(inode, fio->page);
bi_crypt_skip = fio->encrypted_page ? 1 : 0;
bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
if (bio && !page_is_mergeable(fio->sbi, bio, *fio->last_block,
fio->new_blkaddr))
f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
/* ICE support */
if (bio && !fscrypt_mergeable_bio(bio, dun,
bio_encrypted, bi_crypt_skip))
f2fs_submit_merged_ipu_write(fio->sbi, &bio, NULL);
alloc_new:
if (!bio) {
bio = __bio_alloc(fio, BIO_MAX_PAGES);
bio_set_op_attrs(bio, fio->op, fio->op_flags);
if (bio_encrypted)
fscrypt_set_ice_dun(inode, bio, dun);
fscrypt_set_ice_skip(bio, bi_crypt_skip);
add_bio_entry(fio->sbi, bio, page, fio->temp);
} else {
if (add_ipu_page(fio->sbi, &bio, page))
@ -768,10 +748,6 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
enum page_type btype = PAGE_TYPE_OF_BIO(fio->type);
struct f2fs_bio_info *io = sbi->write_io[btype] + fio->temp;
struct page *bio_page;
struct inode *inode;
bool bio_encrypted;
int bi_crypt_skip;
u64 dun;
f2fs_bug_on(sbi, is_read_io(fio->op));
@ -792,11 +768,6 @@ next:
verify_fio_blkaddr(fio);
bio_page = fio->encrypted_page ? fio->encrypted_page : fio->page;
inode = fio->page->mapping->host;
dun = PG_DUN(inode, fio->page);
bi_crypt_skip = fio->encrypted_page ? 1 : 0;
bio_encrypted = f2fs_may_encrypt_bio(inode, fio);
fio->op_flags |= fio->encrypted_page ? REQ_NOENCRYPT : 0;
/* set submitted = true as a return value */
fio->submitted = true;
@ -806,11 +777,6 @@ next:
if (io->bio && !io_is_mergeable(sbi, io->bio, io, fio,
io->last_block_in_bio, fio->new_blkaddr))
__submit_merged_bio(io);
/* ICE support */
if (!fscrypt_mergeable_bio(io->bio, dun, bio_encrypted, bi_crypt_skip))
__submit_merged_bio(io);
alloc_new:
if (io->bio == NULL) {
if (F2FS_IO_ALIGNED(sbi) &&
@ -822,10 +788,6 @@ alloc_new:
}
io->bio = __bio_alloc(fio, BIO_MAX_PAGES);
if (bio_encrypted)
fscrypt_set_ice_dun(inode, io->bio, dun);
fscrypt_set_ice_skip(io->bio, bi_crypt_skip);
io->fio = *fio;
}
@ -871,11 +833,9 @@ static struct bio *f2fs_grab_read_bio(struct inode *inode, block_t blkaddr,
return ERR_PTR(-ENOMEM);
f2fs_target_device(sbi, blkaddr, bio);
bio->bi_end_io = f2fs_read_end_io;
op_flag |= IS_ENCRYPTED(inode) ? REQ_NOENCRYPT : 0;
bio_set_op_attrs(bio, REQ_OP_READ, op_flag);
if (f2fs_encrypted_file(inode) &&
!fscrypt_using_hardware_encryption(inode))
if (f2fs_encrypted_file(inode))
post_read_steps |= 1 << STEP_DECRYPT;
if (f2fs_need_verity(inode, first_idx))
@ -906,9 +866,6 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
if (IS_ERR(bio))
return PTR_ERR(bio);
if (f2fs_may_encrypt_bio(inode, NULL))
fscrypt_set_ice_dun(inode, bio, PG_DUN(inode, page));
/* wait for GCed page writeback via META_MAPPING */
f2fs_wait_on_block_writeback(inode, blkaddr);
@ -1375,7 +1332,6 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
if (map->m_next_extent)
*map->m_next_extent = pgofs + map->m_len;
/* for hardware encryption, but to avoid potential issue in future */
if (flag == F2FS_GET_BLOCK_DIO)
f2fs_wait_on_block_writeback_range(inode,
map->m_pblk, map->m_len);
@ -1540,7 +1496,6 @@ skip:
sync_out:
/* for hardware encryption, but to avoid potential issue in future */
if (flag == F2FS_GET_BLOCK_DIO && map->m_flags & F2FS_MAP_MAPPED)
f2fs_wait_on_block_writeback_range(inode,
map->m_pblk, map->m_len);
@ -1851,8 +1806,6 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
sector_t last_block;
sector_t last_block_in_file;
sector_t block_nr;
bool bio_encrypted;
u64 dun;
int ret = 0;
block_in_file = (sector_t)page_index(page);
@ -1924,13 +1877,6 @@ submit_and_realloc:
bio = NULL;
}
dun = PG_DUN(inode, page);
bio_encrypted = f2fs_may_encrypt_bio(inode, NULL);
if (!fscrypt_mergeable_bio(bio, dun, bio_encrypted, 0)) {
__submit_bio(F2FS_I_SB(inode), bio, DATA);
bio = NULL;
}
if (bio == NULL) {
bio = f2fs_grab_read_bio(inode, block_nr, nr_pages,
is_readahead ? REQ_RAHEAD : 0, page->index);
@ -1939,8 +1885,6 @@ submit_and_realloc:
bio = NULL;
goto out;
}
if (bio_encrypted)
fscrypt_set_ice_dun(inode, bio, dun);
}
/*
@ -2014,6 +1958,7 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
zero_user_segment(page, 0, PAGE_SIZE);
unlock_page(page);
}
next_page:
if (pages)
put_page(page);
@ -2069,8 +2014,6 @@ static int encrypt_one_page(struct f2fs_io_info *fio)
f2fs_wait_on_block_writeback(inode, fio->old_blkaddr);
retry_encrypt:
if (fscrypt_using_hardware_encryption(inode))
return 0;
fio->encrypted_page = fscrypt_encrypt_pagecache_blocks(fio->page,
PAGE_SIZE, 0,

View File

@ -3603,9 +3603,7 @@ static inline void f2fs_set_encrypted_inode(struct inode *inode)
*/
static inline bool f2fs_post_read_required(struct inode *inode)
{
return (f2fs_encrypted_file(inode)
&& !fscrypt_using_hardware_encryption(inode))
|| fsverity_active(inode);
return f2fs_encrypted_file(inode) || fsverity_active(inode);
}
#define F2FS_FEATURE_FUNCS(name, flagname) \
@ -3757,16 +3755,6 @@ static inline bool f2fs_force_buffered_io(struct inode *inode,
return false;
}
static inline bool f2fs_may_encrypt_bio(struct inode *inode,
struct f2fs_io_info *fio)
{
if (fio && (fio->type != DATA || fio->encrypted_page))
return false;
return (f2fs_encrypted_file(inode) &&
fscrypt_using_hardware_encryption(inode));
}
#ifdef CONFIG_F2FS_FAULT_INJECTION
extern void f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned int rate,
unsigned int type);

View File

@ -1064,27 +1064,6 @@ static void destroy_device_list(struct f2fs_sb_info *sbi)
kvfree(sbi->devs);
}
static void f2fs_umount_end(struct super_block *sb, int flags)
{
/*
* This is called at the end of umount(2). If there is an unclosed
* namespace, f2fs won't do put_super(); skipping it would trigger fsck
* on the next boot.
*/
if ((flags & MNT_FORCE) || atomic_read(&sb->s_active) > 1) {
/* to write the latest kbytes_written */
if (!(sb->s_flags & MS_RDONLY)) {
struct f2fs_sb_info *sbi = F2FS_SB(sb);
struct cp_control cpc = {
.reason = CP_UMOUNT,
};
mutex_lock(&sbi->gc_mutex);
f2fs_write_checkpoint(F2FS_SB(sb), &cpc);
mutex_unlock(&sbi->gc_mutex);
}
}
}
static void f2fs_put_super(struct super_block *sb)
{
struct f2fs_sb_info *sbi = F2FS_SB(sb);
@ -2303,7 +2282,6 @@ static const struct super_operations f2fs_sops = {
#endif
.evict_inode = f2fs_evict_inode,
.put_super = f2fs_put_super,
.umount_end = f2fs_umount_end,
.sync_fs = f2fs_sync_fs,
.freeze_fs = f2fs_freeze,
.unfreeze_fs = f2fs_unfreeze,
@ -2344,11 +2322,6 @@ static bool f2fs_dummy_context(struct inode *inode)
return DUMMY_ENCRYPTION_ENABLED(F2FS_I_SB(inode));
}
static inline bool f2fs_is_encrypted(struct inode *inode)
{
return f2fs_encrypted_file(inode);
}
static const struct fscrypt_operations f2fs_cryptops = {
.key_prefix = "f2fs:",
.get_context = f2fs_get_context,
@ -2356,7 +2329,6 @@ static const struct fscrypt_operations f2fs_cryptops = {
.dummy_context = f2fs_dummy_context,
.empty_dir = f2fs_empty_dir,
.max_namelen = F2FS_NAME_LEN,
.is_encrypted = f2fs_is_encrypted,
};
#endif

View File

@ -3010,11 +3010,6 @@ int vfs_create2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry,
if (error)
return error;
error = dir->i_op->create(dir, dentry, mode, want_excl);
if (error)
return error;
error = security_inode_post_create(dir, dentry, mode);
if (error)
return error;
if (!error)
fsnotify_create(dir, dentry);
return error;
@ -3839,11 +3834,6 @@ int vfs_mknod2(struct vfsmount *mnt, struct inode *dir, struct dentry *dentry, u
return error;
error = dir->i_op->mknod(dir, dentry, mode, dev);
if (error)
return error;
error = security_inode_post_create(dir, dentry, mode);
if (error)
return error;
if (!error)
fsnotify_create(dir, dentry);
return error;

View File

@ -21,7 +21,6 @@
#include <linux/fs_struct.h> /* get_fs_root et.al. */
#include <linux/fsnotify.h> /* fsnotify_vfsmount_delete */
#include <linux/uaccess.h>
#include <linux/file.h>
#include <linux/proc_ns.h>
#include <linux/magic.h>
#include <linux/bootmem.h>
@ -1135,12 +1134,6 @@ static void delayed_mntput(struct work_struct *unused)
}
static DECLARE_DELAYED_WORK(delayed_mntput_work, delayed_mntput);
void flush_delayed_mntput_wait(void)
{
delayed_mntput(NULL);
flush_delayed_work(&delayed_mntput_work);
}
static void mntput_no_expire(struct mount *mnt)
{
rcu_read_lock();
@ -1657,7 +1650,6 @@ int ksys_umount(char __user *name, int flags)
struct mount *mnt;
int retval;
int lookup_flags = 0;
bool user_request = !(current->flags & PF_KTHREAD);
if (flags & ~(MNT_FORCE | MNT_DETACH | MNT_EXPIRE | UMOUNT_NOFOLLOW))
return -EINVAL;
@ -1683,36 +1675,12 @@ int ksys_umount(char __user *name, int flags)
if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
goto dput_and_out;
/* flush delayed_fput to put mnt_count */
if (user_request)
flush_delayed_fput_wait();
retval = do_umount(mnt, flags);
dput_and_out:
/* we mustn't call path_put() as that would clear mnt_expiry_mark */
dput(path.dentry);
if (user_request && (!retval || (flags & MNT_FORCE))) {
/* filesystem needs to handle unclosed namespaces */
if (mnt->mnt.mnt_sb->s_op->umount_end)
mnt->mnt.mnt_sb->s_op->umount_end(mnt->mnt.mnt_sb,
flags);
}
mntput_no_expire(mnt);
if (!user_request)
goto out;
if (!retval) {
/*
* If the last delayed_fput() is called during do_umount() and makes
* mnt_count zero, we need to make sure delayed_mntput gets registered
* by waiting for the delayed_fput work again.
*/
flush_delayed_fput_wait();
/* flush delayed_mntput_work to put sb->s_active */
flush_delayed_mntput_wait();
}
out:
return retval;
}

View File

@ -73,9 +73,6 @@
#define bio_sectors(bio) bvec_iter_sectors((bio)->bi_iter)
#define bio_end_sector(bio) bvec_iter_end_sector((bio)->bi_iter)
#define bio_dun(bio) ((bio)->bi_iter.bi_dun)
#define bio_duns(bio) (bio_sectors(bio) >> 3) /* 4KB unit */
#define bio_end_dun(bio) (bio_dun(bio) + bio_duns(bio))
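As an illustration (not in the original header) of how these macros compose, assuming 512-byte sectors:
/* Example: a bio covering 16 sectors carries 8 KB of data, so
 *   bio_duns(bio)    = 16 >> 3 = 2        (two 4 KB data units)
 *   bio_end_dun(bio) = bio_dun(bio) + 2   (first DUN past this bio)
 */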
/*
* Return the data direction, READ or WRITE.
@ -173,11 +170,6 @@ static inline void bio_advance_iter(struct bio *bio, struct bvec_iter *iter,
{
iter->bi_sector += bytes >> 9;
#ifdef CONFIG_PFK
if (iter->bi_dun)
iter->bi_dun += bytes >> 12;
#endif
if (bio_no_advance_iter(bio)) {
iter->bi_size -= bytes;
iter->bi_done += bytes;

View File

@ -187,13 +187,6 @@ struct bio {
struct bio_integrity_payload *bi_integrity; /* data integrity */
#endif
};
#ifdef CONFIG_PFK
/* Encryption key to use (NULL if none) */
const struct blk_encryption_key *bi_crypt_key;
#endif
#ifdef CONFIG_DM_DEFAULT_KEY
int bi_crypt_skip;
#endif
unsigned short bi_vcnt; /* how many bio_vec's */
@ -208,9 +201,7 @@ struct bio {
struct bio_vec *bi_io_vec; /* the actual vec list */
struct bio_set *bi_pool;
#ifdef CONFIG_PFK
struct inode *bi_dio_inode;
#endif
/*
* We can inline a number of vecs at the end of the bio, to avoid
* double allocations for a small number of bio_vecs. This member
@ -340,11 +331,6 @@ enum req_flag_bits {
/* for driver use */
__REQ_DRV,
__REQ_SWAP, /* swapping request. */
/* Android specific flags */
__REQ_NOENCRYPT, /*
* ok to not encrypt (already encrypted at fs
* level)
*/
__REQ_NR_BITS, /* stops here */
};
@ -363,10 +349,11 @@ enum req_flag_bits {
#define REQ_RAHEAD (1ULL << __REQ_RAHEAD)
#define REQ_BACKGROUND (1ULL << __REQ_BACKGROUND)
#define REQ_NOWAIT (1ULL << __REQ_NOWAIT)
#define REQ_NOUNMAP (1ULL << __REQ_NOUNMAP)
#define REQ_DRV (1ULL << __REQ_DRV)
#define REQ_SWAP (1ULL << __REQ_SWAP)
#define REQ_NOENCRYPT (1ULL << __REQ_NOENCRYPT)
#define REQ_FAILFAST_MASK \
(REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER)

View File

@ -161,7 +161,6 @@ struct request {
unsigned int __data_len; /* total data len */
int tag;
sector_t __sector; /* sector cursor */
u64 __dun; /* dun for UFS */
struct bio *bio;
struct bio *biotail;
@ -705,7 +704,6 @@ struct request_queue {
#define QUEUE_FLAG_REGISTERED 26 /* queue has been registered to a disk */
#define QUEUE_FLAG_SCSI_PASSTHROUGH 27 /* queue supports SCSI commands */
#define QUEUE_FLAG_QUIESCED 28 /* queue has been quiesced */
#define QUEUE_FLAG_INLINECRYPT 29 /* inline encryption support */
#define QUEUE_FLAG_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
(1 << QUEUE_FLAG_SAME_COMP) | \
@ -738,8 +736,6 @@ bool blk_queue_flag_test_and_clear(unsigned int flag, struct request_queue *q);
#define blk_queue_dax(q) test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
#define blk_queue_scsi_passthrough(q) \
test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
#define blk_queue_inlinecrypt(q) \
test_bit(QUEUE_FLAG_INLINECRYPT, &(q)->queue_flags)
#define blk_noretry_request(rq) \
((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
@ -886,24 +882,6 @@ static inline unsigned int blk_queue_depth(struct request_queue *q)
return q->nr_requests;
}
static inline void queue_flag_set_unlocked(unsigned int flag,
struct request_queue *q)
{
if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
kref_read(&q->kobj.kref))
lockdep_assert_held(q->queue_lock);
__set_bit(flag, &q->queue_flags);
}
static inline void queue_flag_clear_unlocked(unsigned int flag,
struct request_queue *q)
{
if (test_bit(QUEUE_FLAG_INIT_DONE, &q->queue_flags) &&
kref_read(&q->kobj.kref))
lockdep_assert_held(q->queue_lock);
__clear_bit(flag, &q->queue_flags);
}
/*
* q->prep_rq_fn return values
*/
@ -1069,11 +1047,6 @@ static inline sector_t blk_rq_pos(const struct request *rq)
return rq->__sector;
}
static inline sector_t blk_rq_dun(const struct request *rq)
{
return rq->__dun;
}
static inline unsigned int blk_rq_bytes(const struct request *rq)
{
return rq->__data_len;

View File

@ -44,7 +44,6 @@ struct bvec_iter {
unsigned int bi_bvec_done; /* number of bytes completed in
current bvec */
u64 bi_dun; /* DUN setting for bio */
};
/*

View File

@ -1896,7 +1896,6 @@ struct super_operations {
void *(*clone_mnt_data) (void *);
void (*copy_mnt_data) (void *, void *);
void (*umount_begin) (struct super_block *);
void (*umount_end)(struct super_block *sb, int flags);
int (*show_options)(struct seq_file *, struct dentry *);
int (*show_options2)(struct vfsmount *,struct seq_file *, struct dentry *);
@ -3122,8 +3121,6 @@ static inline void inode_dio_end(struct inode *inode)
wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
}
struct inode *dio_bio_get_inode(struct bio *bio);
extern void inode_set_flags(struct inode *inode, unsigned int flags,
unsigned int mask);

View File

@ -20,11 +20,6 @@
#define FS_CRYPTO_BLOCK_SIZE 16
struct fscrypt_ctx;
/* iv sector for security/pfe/pfk_fscrypt.c and f2fs */
#define PG_DUN(i, p) \
(((((u64)(i)->i_ino) & 0xffffffff) << 32) | ((p)->index & 0xffffffff))
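Purely as an illustration (not part of the original header), with i_ino = 0x1C2 and a page index of 0x10:
/* PG_DUN(i, p) = (0x1C2 << 32) | 0x10 = 0x000001C200000010
 * i.e. the upper 32 bits identify the file, the lower 32 bits the page.
 */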
struct fscrypt_info;
struct fscrypt_str {
@ -712,33 +707,6 @@ static inline int fscrypt_encrypt_symlink(struct inode *inode,
return 0;
}
/* fscrypt_ice.c */
#ifdef CONFIG_PFK
extern int fscrypt_using_hardware_encryption(const struct inode *inode);
extern void fscrypt_set_ice_dun(const struct inode *inode,
struct bio *bio, u64 dun);
extern void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip);
extern bool fscrypt_mergeable_bio(struct bio *bio, u64 dun, bool bio_encrypted,
int bi_crypt_skip);
#else
static inline int fscrypt_using_hardware_encryption(const struct inode *inode)
{
return 0;
}
static inline void fscrypt_set_ice_dun(const struct inode *inode,
struct bio *bio, u64 dun){}
static inline void fscrypt_set_ice_skip(struct bio *bio, int bi_crypt_skip)
{}
static inline bool fscrypt_mergeable_bio(struct bio *bio,
u64 dun, bool bio_encrypted, int bi_crypt_skip)
{
return true;
}
#endif
/* If *pagep is a bounce page, free it and set *pagep to the pagecache page */
static inline void fscrypt_finalize_bounce_page(struct page **pagep)
{
@ -749,5 +717,4 @@ static inline void fscrypt_finalize_bounce_page(struct page **pagep)
fscrypt_free_bounce_page(page);
}
}
#endif /* _LINUX_FSCRYPT_H */

View File

@ -1516,8 +1516,6 @@ union security_list_options {
size_t *len);
int (*inode_create)(struct inode *dir, struct dentry *dentry,
umode_t mode);
int (*inode_post_create)(struct inode *dir, struct dentry *dentry,
umode_t mode);
int (*inode_link)(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int (*inode_unlink)(struct inode *dir, struct dentry *dentry);
@ -1840,7 +1838,6 @@ struct security_hook_heads {
struct hlist_head inode_free_security;
struct hlist_head inode_init_security;
struct hlist_head inode_create;
struct hlist_head inode_post_create;
struct hlist_head inode_link;
struct hlist_head inode_unlink;
struct hlist_head inode_symlink;

View File

@ -164,7 +164,6 @@ struct mmc_request {
*/
void (*recovery_notifier)(struct mmc_request *);
struct mmc_host *host;
struct request *req;
/* Allow other commands during this ongoing data transfer or busy wait */
bool cap_cmd_during_tfr;

View File

@ -31,7 +31,6 @@
#include <linux/string.h>
#include <linux/mm.h>
#include <linux/fs.h>
#include <linux/bio.h>
struct linux_binprm;
struct cred;
@ -284,8 +283,6 @@ int security_old_inode_init_security(struct inode *inode, struct inode *dir,
const struct qstr *qstr, const char **name,
void **value, size_t *len);
int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode);
int security_inode_post_create(struct inode *dir, struct dentry *dentry,
umode_t mode);
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry);
int security_inode_unlink(struct inode *dir, struct dentry *dentry);
@ -674,13 +671,6 @@ static inline int security_inode_create(struct inode *dir,
return 0;
}
static inline int security_inode_post_create(struct inode *dir,
struct dentry *dentry,
umode_t mode)
{
return 0;
}
static inline int security_inode_link(struct dentry *old_dentry,
struct inode *dir,
struct dentry *new_dentry)

View File

@ -651,9 +651,6 @@ struct Scsi_Host {
/* The controller does not support WRITE SAME */
unsigned no_write_same:1;
/* Inline encryption support? */
unsigned inlinecrypt_support:1;
unsigned use_blk_mq:1;
unsigned use_cmd_list:1;

View File

@ -283,7 +283,6 @@ struct fsxattr {
#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */
#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. */
#define FS_ENCRYPTION_MODE_ADIANTUM 9
#define FS_ENCRYPTION_MODE_PRIVATE 127
struct fscrypt_policy {
__u8 version;

View File

@ -6,10 +6,6 @@ menu "Security options"
source security/keys/Kconfig
if ARCH_QCOM
source security/pfe/Kconfig
endif
config SECURITY_DMESG_RESTRICT
bool "Restrict unprivileged access to the kernel syslog"
default n

View File

@ -10,7 +10,6 @@ subdir-$(CONFIG_SECURITY_TOMOYO) += tomoyo
subdir-$(CONFIG_SECURITY_APPARMOR) += apparmor
subdir-$(CONFIG_SECURITY_YAMA) += yama
subdir-$(CONFIG_SECURITY_LOADPIN) += loadpin
subdir-$(CONFIG_ARCH_QCOM) += pfe
# always enable default capabilities
obj-y += commoncap.o
@ -27,7 +26,6 @@ obj-$(CONFIG_SECURITY_APPARMOR) += apparmor/
obj-$(CONFIG_SECURITY_YAMA) += yama/
obj-$(CONFIG_SECURITY_LOADPIN) += loadpin/
obj-$(CONFIG_CGROUP_DEVICE) += device_cgroup.o
obj-$(CONFIG_ARCH_QCOM) += pfe/
# Object integrity file lists
subdir-$(CONFIG_INTEGRITY) += integrity

View File

@ -1,42 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
menu "Qualcomm Technologies, Inc Per File Encryption security device drivers"
depends on ARCH_QCOM
config PFT
bool "Per-File-Tagger driver"
depends on SECURITY
default n
help
This driver is used for tagging enterprise files.
It is part of the Per-File-Encryption (PFE) feature.
The driver is tagging files when created by
registered application.
Tagged files are encrypted using the dm-req-crypt driver.
config PFK
bool "Per-File-Key driver"
depends on SECURITY
depends on SECURITY_SELINUX
default n
help
This driver is used for storing eCryptfs information
in file node.
This is part of eCryptfs hardware enhanced solution
provided by Qualcomm Technologies, Inc.
Information is used when file is encrypted later using
ICE or dm crypto engine
config PFK_WRAPPED_KEY_SUPPORTED
bool "Per-File-Key driver with wrapped key support"
depends on SECURITY
depends on SECURITY_SELINUX
depends on QSEECOM
depends on PFK
default n
help
Adds wrapped key support in PFK driver. Instead of setting
the key directly in ICE, it unwraps the key and sets the key
in ICE.
It ensures the key is protected within a secure environment
and only the wrapped key is present in the kernel.
endmenu

View File

@ -1,7 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
ccflags-y += -Isecurity/selinux -Isecurity/selinux/include
ccflags-y += -Ifs/crypto
ccflags-y += -Idrivers/misc
obj-$(CONFIG_PFT) += pft.o
obj-$(CONFIG_PFK) += pfk.o pfk_kc.o pfk_ice.o pfk_ext4.o pfk_f2fs.o

View File

@ -1,554 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
/*
* Per-File-Key (PFK).
*
* This driver is responsible for overall management of various
* Per File Encryption variants that work on top of or as part of different
* file systems.
*
* The driver has the following purposes:
* 1) Define priorities between PFE variants if more than one is enabled
* 2) Extract key information from the inode
* 3) Load and manage the various keys in the ICE HW engine
* 4) Be invoked from the various FS/BLOCK/STORAGE DRIVER layers that need
* to make decisions about HW encryption management of the data
* Some examples:
* BLOCK LAYER: when it decides whether two chunks can be united into one
* encryption / decryption request sent to the HW
*
* UFS DRIVER: when it needs to configure the ICE HW with a particular key slot
* to be used for encryption / decryption
*
* PFE variants can differ on particular way of storing the cryptographic info
* inside inode, actions to be taken upon file operations, etc., but the common
* properties are described above
*
*/
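As a rough, hypothetical sketch of the BLOCK LAYER case described above; blk_try_merge_bios() is a placeholder, not a real kernel API:
/* Illustrative only: a block-layer merge decision consulting PFK. */
static bool blk_try_merge_bios(struct bio *bio1, struct bio *bio2)
{
	/* never unite chunks that would need different ICE keys */
	if (!pfk_allow_merge_bio(bio1, bio2))
		return false;
	/* ... the usual sector/size contiguity checks would go here ... */
	return true;
}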
#define pr_fmt(fmt) "pfk [%s]: " fmt, __func__
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/bio.h>
#include <linux/security.h>
#include <crypto/algapi.h>
#include <crypto/ice.h>
#include <linux/pfk.h>
#include "pfk_kc.h"
#include "objsec.h"
#include "pfk_ice.h"
#include "pfk_ext4.h"
#include "pfk_f2fs.h"
#include "pfk_internal.h"
static bool pfk_ready;
/* might be replaced by a table when more than one cipher is supported */
#define PFK_SUPPORTED_KEY_SIZE 32
#define PFK_SUPPORTED_SALT_SIZE 32
/* Various PFE types and function tables to support each one of them */
enum pfe_type {EXT4_CRYPT_PFE, F2FS_CRYPT_PFE, INVALID_PFE};
typedef int (*pfk_parse_inode_type)(const struct bio *bio,
const struct inode *inode,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo,
bool *is_pfe);
typedef bool (*pfk_allow_merge_bio_type)(const struct bio *bio1,
const struct bio *bio2, const struct inode *inode1,
const struct inode *inode2);
static const pfk_parse_inode_type pfk_parse_inode_ftable[] = {
&pfk_ext4_parse_inode, /* EXT4_CRYPT_PFE */
&pfk_f2fs_parse_inode, /* F2FS_CRYPT_PFE */
};
static const pfk_allow_merge_bio_type pfk_allow_merge_bio_ftable[] = {
&pfk_ext4_allow_merge_bio, /* EXT4_CRYPT_PFE */
&pfk_f2fs_allow_merge_bio, /* F2FS_CRYPT_PFE */
};
static void __exit pfk_exit(void)
{
pfk_ready = false;
pfk_ext4_deinit();
pfk_f2fs_deinit();
pfk_kc_deinit();
}
static int __init pfk_init(void)
{
int ret = 0;
ret = pfk_ext4_init();
if (ret != 0)
goto fail;
ret = pfk_f2fs_init();
if (ret != 0)
goto fail;
pfk_ready = true;
pr_debug("Driver initialized successfully\n");
return 0;
fail:
pr_err("Failed to init driver\n");
return -ENODEV;
}
/*
* If more than one type is supported simultaneously, this function will also
* set the priority between them
*/
static enum pfe_type pfk_get_pfe_type(const struct inode *inode)
{
if (!inode)
return INVALID_PFE;
if (pfk_is_ext4_type(inode))
return EXT4_CRYPT_PFE;
if (pfk_is_f2fs_type(inode))
return F2FS_CRYPT_PFE;
return INVALID_PFE;
}
/**
* inode_to_filename() - get the filename from inode pointer.
* @inode: inode pointer
*
* it is used for debug prints.
*
* Return: filename string or "unknown".
*/
char *inode_to_filename(const struct inode *inode)
{
struct dentry *dentry = NULL;
char *filename = NULL;
if (!inode)
return "NULL";
if (hlist_empty(&inode->i_dentry))
return "unknown";
dentry = hlist_entry(inode->i_dentry.first, struct dentry, d_u.d_alias);
filename = dentry->d_iname;
return filename;
}
/**
* pfk_is_ready() - driver is initialized and ready.
*
* Return: true if the driver is ready.
*/
static inline bool pfk_is_ready(void)
{
return pfk_ready;
}
/**
* pfk_bio_get_inode() - get the inode from a bio.
* @bio: Pointer to BIO structure.
*
* Walk the bio struct links to get the inode.
* Please note that, in general, a bio may consist of several pages from
* several files; in our case, however, we always assume that all pages come
* from the same file, since our logic ensures it. That is why we only
* walk through the first page to look for the inode.
*
* Return: pointer to the inode struct if successful, or NULL otherwise.
*
*/
static struct inode *pfk_bio_get_inode(const struct bio *bio)
{
if (!bio)
return NULL;
if (!bio_has_data((struct bio *)bio))
return NULL;
if (!bio->bi_io_vec)
return NULL;
if (!bio->bi_io_vec->bv_page)
return NULL;
if (PageAnon(bio->bi_io_vec->bv_page)) {
struct inode *inode;
/* Using direct-io (O_DIRECT) without page cache */
inode = dio_bio_get_inode((struct bio *)bio);
pr_debug("inode on direct-io, inode = 0x%pK.\n", inode);
return inode;
}
if (!page_mapping(bio->bi_io_vec->bv_page))
return NULL;
return page_mapping(bio->bi_io_vec->bv_page)->host;
}
/**
* pfk_key_size_to_key_type() - translate key size to key size enum
* @key_size: key size in bytes
* @key_size_type: pointer to store the output enum (can be null)
*
* return 0 in case of success, error otherwise (i.e not supported key size)
*/
int pfk_key_size_to_key_type(size_t key_size,
enum ice_crpto_key_size *key_size_type)
{
/*
* currently only a 32 byte key size is supported;
* in the future, a table of supported key sizes might
* be introduced
*/
if (key_size != PFK_SUPPORTED_KEY_SIZE) {
pr_err("not supported key size %zu\n", key_size);
return -EINVAL;
}
if (key_size_type)
*key_size_type = ICE_CRYPTO_KEY_SIZE_256;
return 0;
}
/*
* Retrieves filesystem type from inode's superblock
*/
bool pfe_is_inode_filesystem_type(const struct inode *inode,
const char *fs_type)
{
if (!inode || !fs_type)
return false;
if (!inode->i_sb)
return false;
if (!inode->i_sb->s_type)
return false;
return (strcmp(inode->i_sb->s_type->name, fs_type) == 0);
}
/**
* pfk_get_key_for_bio() - get the encryption key to be used for a bio
*
* @bio: pointer to the BIO
* @key_info: pointer to the key information which will be filled in
* @algo_mode: optional pointer to the algorithm identifier which will be set
* @is_pfe: will be set to false if the BIO should be left unencrypted
*
* Return: 0 if a key is being used, otherwise a -errno value
*/
static int pfk_get_key_for_bio(const struct bio *bio,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo_mode,
bool *is_pfe, unsigned int *data_unit)
{
const struct inode *inode;
enum pfe_type which_pfe;
const struct blk_encryption_key *key = NULL;
char *s_type = NULL;
inode = pfk_bio_get_inode(bio);
which_pfe = pfk_get_pfe_type(inode);
s_type = (char *)pfk_kc_get_storage_type();
/*
* Update dun based on storage type.
* 512 byte dun - For ext4 emmc
* 4K dun - For ext4 ufs, f2fs ufs and f2fs emmc
*/
if (data_unit) {
if (!bio_dun(bio) && !memcmp(s_type, "sdcc", strlen("sdcc")))
*data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
else
*data_unit = 1 << ICE_CRYPTO_DATA_UNIT_4_KB;
}
if (which_pfe != INVALID_PFE) {
/* Encrypted file; override ->bi_crypt_key */
pr_debug("parsing inode %lu with PFE type %d\n",
inode->i_ino, which_pfe);
return (*(pfk_parse_inode_ftable[which_pfe]))
(bio, inode, key_info, algo_mode, is_pfe);
}
/*
* bio is not for an encrypted file. Use ->bi_crypt_key if it was set.
* Otherwise, don't encrypt/decrypt the bio.
*/
#ifdef CONFIG_DM_DEFAULT_KEY
key = bio->bi_crypt_key;
#endif
if (!key) {
*is_pfe = false;
return -EINVAL;
}
/* Note: the "salt" is really just the second half of the XTS key. */
BUILD_BUG_ON(sizeof(key->raw) !=
PFK_SUPPORTED_KEY_SIZE + PFK_SUPPORTED_SALT_SIZE);
key_info->key = &key->raw[0];
key_info->key_size = PFK_SUPPORTED_KEY_SIZE;
key_info->salt = &key->raw[PFK_SUPPORTED_KEY_SIZE];
key_info->salt_size = PFK_SUPPORTED_SALT_SIZE;
if (algo_mode)
*algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
return 0;
}
/**
* pfk_load_key_start() - loads the PFE encryption key into the ICE.
* Can also be invoked from a non-PFE context, in which case it is not
* relevant and the is_pfe flag is set to false.
*
* @bio: Pointer to the BIO structure
* @ice_setting: Pointer to ice setting structure that will be filled with
* ice configuration values, including the index to which the key was loaded
* @is_pfe: will be false if inode is not relevant to PFE, in such a case
* it should be treated as non PFE by the block layer
*
* Returns the index where the key is stored in encryption hw and additional
* information that will be used later for configuration of the encryption hw.
*
* Must be followed by pfk_load_key_end when key is no longer used by ice
*
*/
int pfk_load_key_start(const struct bio *bio, struct ice_device *ice_dev,
struct ice_crypto_setting *ice_setting, bool *is_pfe,
bool async)
{
int ret = 0;
struct pfk_key_info key_info = {NULL, NULL, 0, 0};
enum ice_cryto_algo_mode algo_mode = ICE_CRYPTO_ALGO_MODE_AES_XTS;
enum ice_crpto_key_size key_size_type = 0;
unsigned int data_unit = 1 << ICE_CRYPTO_DATA_UNIT_512_B;
u32 key_index = 0;
if (!is_pfe) {
pr_err("is_pfe is NULL\n");
return -EINVAL;
}
	/*
	 * Only a few of the errors below indicate that this function was not
	 * invoked within a PFE context; otherwise we consider it PFE.
	 */
*is_pfe = true;
if (!pfk_is_ready())
return -ENODEV;
if (!ice_setting) {
pr_err("ice setting is NULL\n");
return -EINVAL;
}
ret = pfk_get_key_for_bio(bio, &key_info, &algo_mode, is_pfe,
&data_unit);
if (ret != 0)
return ret;
ret = pfk_key_size_to_key_type(key_info.key_size, &key_size_type);
if (ret != 0)
return ret;
ret = pfk_kc_load_key_start(key_info.key, key_info.key_size,
key_info.salt, key_info.salt_size, &key_index, async,
data_unit, ice_dev);
if (ret) {
if (ret != -EBUSY && ret != -EAGAIN)
pr_err("start: could not load key into pfk key cache, error %d\n",
ret);
return ret;
}
ice_setting->key_size = key_size_type;
ice_setting->algo_mode = algo_mode;
/* hardcoded for now */
ice_setting->key_mode = ICE_CRYPTO_USE_LUT_SW_KEY;
ice_setting->key_index = key_index;
pr_debug("loaded key for file %s key_index %d\n",
inode_to_filename(pfk_bio_get_inode(bio)), key_index);
return 0;
}
/**
 * pfk_load_key_end() - marks the PFE key as no longer used by ICE.
 * Can also be invoked from a non-PFE context; in that case it is not
 * relevant and the is_pfe flag is set to false.
 *
 * @bio: Pointer to the BIO structure
 * @ice_dev: Pointer to the ICE device the key was loaded into
 * @is_pfe: Pointer to is_pfe flag, which will be true if function was invoked
 * from PFE context
*/
int pfk_load_key_end(const struct bio *bio, struct ice_device *ice_dev,
bool *is_pfe)
{
int ret = 0;
struct pfk_key_info key_info = {NULL, NULL, 0, 0};
if (!is_pfe) {
pr_err("is_pfe is NULL\n");
return -EINVAL;
}
	/* Only a few of the errors below indicate that this function was not
	 * invoked within a PFE context; otherwise we consider it PFE.
	 */
*is_pfe = true;
if (!pfk_is_ready())
return -ENODEV;
ret = pfk_get_key_for_bio(bio, &key_info, NULL, is_pfe, NULL);
if (ret != 0)
return ret;
pfk_kc_load_key_end(key_info.key, key_info.key_size,
key_info.salt, key_info.salt_size, ice_dev);
pr_debug("finished using key for file %s\n",
inode_to_filename(pfk_bio_get_inode(bio)));
return 0;
}
/**
* pfk_allow_merge_bio() - Check if 2 BIOs can be merged.
* @bio1: Pointer to first BIO structure.
* @bio2: Pointer to second BIO structure.
*
 * Prevent merging of BIOs from encrypted and non-encrypted
 * files, or from files encrypted with different keys.
 * Also prevent encrypted and non-encrypted data from the same file
 * from being merged (the ecryptfs header, if stored inside the file,
 * should remain non-encrypted).
* This API is called by the file system block layer.
*
* Return: true if the BIOs allowed to be merged, false
* otherwise.
*/
bool pfk_allow_merge_bio(const struct bio *bio1, const struct bio *bio2)
{
const struct blk_encryption_key *key1 = NULL;
const struct blk_encryption_key *key2 = NULL;
const struct inode *inode1;
const struct inode *inode2;
enum pfe_type which_pfe1;
enum pfe_type which_pfe2;
if (!pfk_is_ready())
return false;
if (!bio1 || !bio2)
return false;
if (bio1 == bio2)
return true;
#ifdef CONFIG_DM_DEFAULT_KEY
key1 = bio1->bi_crypt_key;
key2 = bio2->bi_crypt_key;
#endif
inode1 = pfk_bio_get_inode(bio1);
inode2 = pfk_bio_get_inode(bio2);
which_pfe1 = pfk_get_pfe_type(inode1);
which_pfe2 = pfk_get_pfe_type(inode2);
/*
* If one bio is for an encrypted file and the other is for a different
* type of encrypted file or for blocks that are not part of an
* encrypted file, do not merge.
*/
if (which_pfe1 != which_pfe2)
return false;
if (which_pfe1 != INVALID_PFE) {
/* Both bios are for the same type of encrypted file. */
return (*(pfk_allow_merge_bio_ftable[which_pfe1]))(bio1, bio2,
inode1, inode2);
}
/*
* Neither bio is for an encrypted file. Merge only if the default keys
* are the same (or both are NULL).
*/
return key1 == key2 ||
(key1 && key2 &&
!crypto_memneq(key1->raw, key2->raw, sizeof(key1->raw)));
}
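/*
 * A minimal standalone sketch of the merge policy above (illustrative only;
 * the pfk_example_ name and the EXAMPLE_INVALID_PFE stand-in are hypothetical).
 * Requests merge only when they have the same PFE type; for non-PFE requests
 * the default keys must also match. For brevity this compares key pointers
 * only, whereas the code above additionally treats two distinct key pointers
 * with identical raw bytes as equal.
 */
static inline int pfk_example_may_merge(int pfe_type1, int pfe_type2,
					const void *default_key1,
					const void *default_key2)
{
	enum { EXAMPLE_INVALID_PFE = -1 };	/* stand-in for INVALID_PFE */

	if (pfe_type1 != pfe_type2)
		return 0;			/* different PFE types never merge */
	if (pfe_type1 != EXAMPLE_INVALID_PFE)
		return 1;			/* same PFE type: fs callback decides */
	return default_key1 == default_key2;	/* non-PFE: same default key */
}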
int pfk_fbe_clear_key(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size)
{
int ret = -EINVAL;
if (!key || !salt)
return ret;
ret = pfk_kc_remove_key_with_salt(key, key_size, salt, salt_size);
if (ret)
pr_err("Clear key error: ret value %d\n", ret);
return ret;
}
/**
 * Flush the key table on storage core reset. During core reset the key
 * configuration is lost in ICE. We need to flush the cache so that the keys
 * will be reconfigured for every subsequent transaction.
 */
void pfk_clear_on_reset(struct ice_device *ice_dev)
{
if (!pfk_is_ready())
return;
pfk_kc_clear_on_reset(ice_dev);
}
int pfk_remove(struct ice_device *ice_dev)
{
return pfk_kc_clear(ice_dev);
}
int pfk_initialize_key_table(struct ice_device *ice_dev)
{
return pfk_kc_initialize_key_table(ice_dev);
}
module_init(pfk_init);
module_exit(pfk_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Per-File-Key driver");

View File

@ -1,177 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
/*
* Per-File-Key (PFK) - EXT4
*
 * This driver is used for working with the EXT4 crypt extension.
 *
 * The key information is stored in the inode by EXT4 when the file is first
 * opened and will later be accessed by the block device driver to actually
 * load the key into the encryption hw.
 *
 * PFK exposes APIs for loading and removing keys from the encryption hw,
 * and also an API to determine whether two adjacent blocks can be aggregated
 * by the block layer into one request to the encryption hw.
*
*/
#define pr_fmt(fmt) "pfk_ext4 [%s]: " fmt, __func__
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include "fscrypt_ice.h"
#include "pfk_ext4.h"
//#include "ext4_ice.h"
static bool pfk_ext4_ready;
/*
* pfk_ext4_deinit() - Deinit function, should be invoked by upper PFK layer
*/
void pfk_ext4_deinit(void)
{
pfk_ext4_ready = false;
}
/*
 * pfk_ext4_init() - Init function, should be invoked by upper PFK layer
*/
int __init pfk_ext4_init(void)
{
pfk_ext4_ready = true;
pr_info("PFK EXT4 inited successfully\n");
return 0;
}
/**
 * pfk_ext4_is_ready() - driver is initialized and ready.
*
* Return: true if the driver is ready.
*/
static inline bool pfk_ext4_is_ready(void)
{
return pfk_ext4_ready;
}
/**
* pfk_is_ext4_type() - return true if inode belongs to ICE EXT4 PFE
* @inode: inode pointer
*/
bool pfk_is_ext4_type(const struct inode *inode)
{
if (!pfe_is_inode_filesystem_type(inode, "ext4"))
return false;
return fscrypt_should_be_processed_by_ice(inode);
}
/**
* pfk_ext4_parse_cipher() - parse cipher from inode to enum
* @inode: inode
* @algo: pointer to store the output enum (can be null)
*
 * return 0 in case of success, error otherwise (i.e. unsupported cipher)
*/
static int pfk_ext4_parse_cipher(const struct inode *inode,
enum ice_cryto_algo_mode *algo)
{
/*
* currently only AES XTS algo is supported
* in the future, table with supported ciphers might
* be introduced
*/
if (!inode)
return -EINVAL;
if (!fscrypt_is_aes_xts_cipher(inode)) {
pr_err("ext4 alghoritm is not supported by pfk\n");
return -EINVAL;
}
if (algo)
*algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
return 0;
}
int pfk_ext4_parse_inode(const struct bio *bio,
const struct inode *inode,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo,
bool *is_pfe)
{
int ret = 0;
if (!is_pfe)
return -EINVAL;
	/*
	 * Only a few of the errors below indicate that this function was not
	 * invoked within a PFE context; otherwise we consider it PFE.
	 */
*is_pfe = true;
if (!pfk_ext4_is_ready())
return -ENODEV;
if (!inode)
return -EINVAL;
if (!key_info)
return -EINVAL;
key_info->key = fscrypt_get_ice_encryption_key(inode);
if (!key_info->key) {
pr_err("could not parse key from ext4\n");
return -EINVAL;
}
key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
if (!key_info->key_size) {
pr_err("could not parse key size from ext4\n");
return -EINVAL;
}
key_info->salt = fscrypt_get_ice_encryption_salt(inode);
if (!key_info->salt) {
pr_err("could not parse salt from ext4\n");
return -EINVAL;
}
key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
if (!key_info->salt_size) {
pr_err("could not parse salt size from ext4\n");
return -EINVAL;
}
ret = pfk_ext4_parse_cipher(inode, algo);
if (ret != 0) {
pr_err("not supported cipher\n");
return ret;
}
return 0;
}
bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
const struct bio *bio2, const struct inode *inode1,
const struct inode *inode2)
{
/* if there is no ext4 pfk, don't disallow merging blocks */
if (!pfk_ext4_is_ready())
return true;
if (!inode1 || !inode2)
return false;
return fscrypt_is_ice_encryption_info_equal(inode1, inode2);
}

View File

@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _PFK_EXT4_H_
#define _PFK_EXT4_H_
#include <linux/types.h>
#include <linux/fs.h>
#include <crypto/ice.h>
#include "pfk_internal.h"
bool pfk_is_ext4_type(const struct inode *inode);
int pfk_ext4_parse_inode(const struct bio *bio,
const struct inode *inode,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo,
bool *is_pfe);
bool pfk_ext4_allow_merge_bio(const struct bio *bio1,
const struct bio *bio2, const struct inode *inode1,
const struct inode *inode2);
int __init pfk_ext4_init(void);
void pfk_ext4_deinit(void);
#endif /* _PFK_EXT4_H_ */

View File

@ -1,188 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
/*
* Per-File-Key (PFK) - f2fs
*
 * This driver is used for working with the EXT4/F2FS crypt extension.
 *
 * The key information is stored in the inode by EXT4/F2FS when the file is
 * first opened and will later be accessed by the block device driver to
 * actually load the key into the encryption hw.
 *
 * PFK exposes APIs for loading and removing keys from the encryption hw,
 * and also an API to determine whether two adjacent blocks can be aggregated
 * by the block layer into one request to the encryption hw.
*
*/
#define pr_fmt(fmt) "pfk_f2fs [%s]: " fmt, __func__
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/printk.h>
#include "fscrypt_ice.h"
#include "pfk_f2fs.h"
static bool pfk_f2fs_ready;
/*
* pfk_f2fs_deinit() - Deinit function, should be invoked by upper PFK layer
*/
void pfk_f2fs_deinit(void)
{
pfk_f2fs_ready = false;
}
/*
* pfk_f2fs_init() - Init function, should be invoked by upper PFK layer
*/
int __init pfk_f2fs_init(void)
{
pfk_f2fs_ready = true;
pr_info("PFK F2FS inited successfully\n");
return 0;
}
/**
* pfk_f2fs_is_ready() - driver is initialized and ready.
*
* Return: true if the driver is ready.
*/
static inline bool pfk_f2fs_is_ready(void)
{
return pfk_f2fs_ready;
}
/**
* pfk_is_f2fs_type() - return true if inode belongs to ICE F2FS PFE
* @inode: inode pointer
*/
bool pfk_is_f2fs_type(const struct inode *inode)
{
if (!pfe_is_inode_filesystem_type(inode, "f2fs"))
return false;
return fscrypt_should_be_processed_by_ice(inode);
}
/**
* pfk_f2fs_parse_cipher() - parse cipher from inode to enum
* @inode: inode
* @algo: pointer to store the output enum (can be null)
*
 * return 0 in case of success, error otherwise (i.e. unsupported cipher)
*/
static int pfk_f2fs_parse_cipher(const struct inode *inode,
enum ice_cryto_algo_mode *algo)
{
/*
* currently only AES XTS algo is supported
* in the future, table with supported ciphers might
* be introduced
*/
if (!inode)
return -EINVAL;
if (!fscrypt_is_aes_xts_cipher(inode)) {
pr_err("f2fs alghoritm is not supported by pfk\n");
return -EINVAL;
}
if (algo)
*algo = ICE_CRYPTO_ALGO_MODE_AES_XTS;
return 0;
}
int pfk_f2fs_parse_inode(const struct bio *bio,
const struct inode *inode,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo,
bool *is_pfe)
{
int ret = 0;
if (!is_pfe)
return -EINVAL;
	/*
	 * Only a few of the errors below indicate that this function was not
	 * invoked within a PFE context; otherwise we consider it PFE.
	 */
*is_pfe = true;
if (!pfk_f2fs_is_ready())
return -ENODEV;
if (!inode)
return -EINVAL;
if (!key_info)
return -EINVAL;
key_info->key = fscrypt_get_ice_encryption_key(inode);
if (!key_info->key) {
pr_err("could not parse key from f2fs\n");
return -EINVAL;
}
key_info->key_size = fscrypt_get_ice_encryption_key_size(inode);
if (!key_info->key_size) {
pr_err("could not parse key size from f2fs\n");
return -EINVAL;
}
key_info->salt = fscrypt_get_ice_encryption_salt(inode);
if (!key_info->salt) {
pr_err("could not parse salt from f2fs\n");
return -EINVAL;
}
key_info->salt_size = fscrypt_get_ice_encryption_salt_size(inode);
if (!key_info->salt_size) {
pr_err("could not parse salt size from f2fs\n");
return -EINVAL;
}
ret = pfk_f2fs_parse_cipher(inode, algo);
if (ret != 0) {
pr_err("not supported cipher\n");
return ret;
}
return 0;
}
bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
const struct bio *bio2, const struct inode *inode1,
const struct inode *inode2)
{
bool mergeable;
/* if there is no f2fs pfk, don't disallow merging blocks */
if (!pfk_f2fs_is_ready())
return true;
if (!inode1 || !inode2)
return false;
mergeable = fscrypt_is_ice_encryption_info_equal(inode1, inode2);
if (!mergeable)
return false;
	/* ICE allows merging only when the DUN (IV) stream is consecutive. */
if (!bio_dun(bio1) && !bio_dun(bio2))
return true;
else if (!bio_dun(bio1) || !bio_dun(bio2))
return false;
return bio_end_dun(bio1) == bio_dun(bio2);
}
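/*
 * A minimal standalone sketch of the DUN contiguity rule above (illustrative
 * only; the pfk_example_ name is hypothetical). With DUNs counted in data
 * units, a bio starting at dun1 and spanning len1_units data units may be
 * merged with a second bio only when the second bio starts exactly where the
 * first one ends, i.e. bio_end_dun(bio1) == bio_dun(bio2).
 */
static inline int pfk_example_dun_contiguous(unsigned long long dun1,
					     unsigned int len1_units,
					     unsigned long long dun2)
{
	return dun1 + len1_units == dun2;
}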

View File

@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _PFK_F2FS_H_
#define _PFK_F2FS_H_
#include <linux/types.h>
#include <linux/fs.h>
#include <crypto/ice.h>
#include "pfk_internal.h"
bool pfk_is_f2fs_type(const struct inode *inode);
int pfk_f2fs_parse_inode(const struct bio *bio,
const struct inode *inode,
struct pfk_key_info *key_info,
enum ice_cryto_algo_mode *algo,
bool *is_pfe);
bool pfk_f2fs_allow_merge_bio(const struct bio *bio1,
const struct bio *bio2, const struct inode *inode1,
const struct inode *inode2);
int __init pfk_f2fs_init(void);
void pfk_f2fs_deinit(void);
#endif /* _PFK_F2FS_H_ */

View File

@ -1,205 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/async.h>
#include <linux/mm.h>
#include <linux/of.h>
#include <linux/device-mapper.h>
#include <soc/qcom/scm.h>
#include <soc/qcom/qseecomi.h>
#include <soc/qcom/qtee_shmbridge.h>
#include <crypto/ice.h>
#include "pfk_ice.h"
/**********************************/
/** global definitions **/
/**********************************/
#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE 0x5
#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE 0x6
/* indexes 0 and 1 are reserved for FDE */
#define MIN_ICE_KEY_INDEX 2
#define MAX_ICE_KEY_INDEX 31
#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID \
TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, TZ_SVC_ES, \
TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE)
#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID \
TZ_SYSCALL_CREATE_SMC_ID(TZ_OWNER_SIP, \
TZ_SVC_ES, TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE)
#define TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID \
TZ_SYSCALL_CREATE_PARAM_ID_2( \
TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL)
#define TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID \
TZ_SYSCALL_CREATE_PARAM_ID_6( \
TZ_SYSCALL_PARAM_TYPE_VAL, \
TZ_SYSCALL_PARAM_TYPE_BUF_RW, TZ_SYSCALL_PARAM_TYPE_VAL, \
TZ_SYSCALL_PARAM_TYPE_VAL, TZ_SYSCALL_PARAM_TYPE_VAL, \
TZ_SYSCALL_PARAM_TYPE_VAL)
#define CONTEXT_SIZE 0x1000
#define ICE_BUFFER_SIZE 64
#define PFK_UFS "ufs"
#define PFK_SDCC "sdcc"
#define PFK_UFS_CARD "ufscard"
#define UFS_CE 10
#define SDCC_CE 20
#define UFS_CARD_CE 30
enum {
ICE_CIPHER_MODE_XTS_128 = 0,
ICE_CIPHER_MODE_CBC_128 = 1,
ICE_CIPHER_MODE_XTS_256 = 3,
ICE_CIPHER_MODE_CBC_256 = 4
};
static int set_key(uint32_t index, const uint8_t *key, const uint8_t *salt,
unsigned int data_unit, struct ice_device *ice_dev)
{
struct scm_desc desc = {0};
int ret = 0;
uint32_t smc_id = 0;
char *tzbuf = NULL;
uint32_t key_size = ICE_BUFFER_SIZE / 2;
struct qtee_shm shm;
ret = qtee_shmbridge_allocate_shm(ICE_BUFFER_SIZE, &shm);
if (ret)
return -ENOMEM;
tzbuf = shm.vaddr;
memcpy(tzbuf, key, key_size);
memcpy(tzbuf+key_size, salt, key_size);
dmac_flush_range(tzbuf, tzbuf + ICE_BUFFER_SIZE);
smc_id = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_ID;
desc.arginfo = TZ_ES_CONFIG_SET_ICE_KEY_CE_TYPE_PARAM_ID;
desc.args[0] = index;
desc.args[1] = shm.paddr;
desc.args[2] = shm.size;
desc.args[3] = ICE_CIPHER_MODE_XTS_256;
desc.args[4] = data_unit;
if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
desc.args[5] = UFS_CARD_CE;
else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
desc.args[5] = SDCC_CE;
else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
desc.args[5] = UFS_CE;
ret = scm_call2_noretry(smc_id, &desc);
if (ret)
pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
qtee_shmbridge_free_shm(&shm);
return ret;
}
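/*
 * A minimal standalone sketch of the shared-buffer layout used above
 * (illustrative only; the pfk_example_ name is hypothetical). The buffer
 * handed to TZ is the 32-byte key followed by the 32-byte salt (the second
 * half of the XTS key), filling the 64-byte ICE_BUFFER_SIZE.
 */
static inline void pfk_example_pack_tz_buffer(unsigned char *buf64,
					      const unsigned char *key32,
					      const unsigned char *salt32)
{
	int i;

	for (i = 0; i < 32; i++)
		buf64[i] = key32[i];		/* first half: key */
	for (i = 0; i < 32; i++)
		buf64[32 + i] = salt32[i];	/* second half: salt */
}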
static int clear_key(uint32_t index, struct ice_device *ice_dev)
{
struct scm_desc desc = {0};
int ret = 0;
uint32_t smc_id = 0;
smc_id = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_ID;
desc.arginfo = TZ_ES_INVALIDATE_ICE_KEY_CE_TYPE_PARAM_ID;
desc.args[0] = index;
if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS_CARD))
desc.args[1] = UFS_CARD_CE;
else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_SDCC))
desc.args[1] = SDCC_CE;
else if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS))
desc.args[1] = UFS_CE;
ret = scm_call2_noretry(smc_id, &desc);
if (ret)
pr_err("%s:SCM call Error: 0x%x\n", __func__, ret);
return ret;
}
int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
struct ice_device *ice_dev, unsigned int data_unit)
{
int ret = 0, ret1 = 0;
if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
pr_err("%s Invalid index %d\n", __func__, index);
return -EINVAL;
}
if (!key || !salt) {
pr_err("%s Invalid key/salt\n", __func__);
return -EINVAL;
}
ret = enable_ice_setup(ice_dev);
if (ret) {
pr_err("%s: could not enable clocks: %d\n", __func__, ret);
goto out;
}
ret = set_key(index, key, salt, data_unit, ice_dev);
if (ret) {
pr_err("%s: Set Key Error: %d\n", __func__, ret);
if (ret == -EBUSY) {
if (disable_ice_setup(ice_dev))
pr_err("%s: clock disable failed\n", __func__);
goto out;
}
/* Try to invalidate the key to keep ICE in proper state */
ret1 = clear_key(index, ice_dev);
if (ret1)
pr_err("%s: Invalidate key error: %d\n", __func__, ret);
}
ret1 = disable_ice_setup(ice_dev);
if (ret1)
pr_err("%s: Error %d disabling clocks\n", __func__, ret1);
out:
return ret;
}
int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev)
{
int ret = 0;
if (index < MIN_ICE_KEY_INDEX || index > MAX_ICE_KEY_INDEX) {
pr_err("%s Invalid index %d\n", __func__, index);
return -EINVAL;
}
ret = enable_ice_setup(ice_dev);
if (ret) {
pr_err("%s: could not enable clocks: 0x%x\n", __func__, ret);
return ret;
}
ret = clear_key(index, ice_dev);
if (ret)
pr_err("%s: Invalidate key error: %d\n", __func__, ret);
if (disable_ice_setup(ice_dev))
pr_err("%s: could not disable clocks\n", __func__);
return ret;
}

View File

@ -1,23 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2018-2019, The Linux Foundation. All rights reserved.
*/
#ifndef PFK_ICE_H_
#define PFK_ICE_H_
/*
* PFK ICE
*
* ICE keys configuration through scm calls.
*
*/
#include <linux/types.h>
#include <crypto/ice.h>
int qti_pfk_ice_set_key(uint32_t index, uint8_t *key, uint8_t *salt,
struct ice_device *ice_dev, unsigned int data_unit);
int qti_pfk_ice_invalidate_key(uint32_t index, struct ice_device *ice_dev);
#endif /* PFK_ICE_H_ */

View File

@ -1,27 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
#ifndef _PFK_INTERNAL_H_
#define _PFK_INTERNAL_H_
#include <linux/types.h>
#include <crypto/ice.h>
struct pfk_key_info {
const unsigned char *key;
const unsigned char *salt;
size_t key_size;
size_t salt_size;
};
int pfk_key_size_to_key_type(size_t key_size,
enum ice_crpto_key_size *key_size_type);
bool pfe_is_inode_filesystem_type(const struct inode *inode,
const char *fs_type);
char *inode_to_filename(const struct inode *inode);
#endif /* _PFK_INTERNAL_H_ */

View File

@ -1,870 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
/*
* PFK Key Cache
*
* Key Cache used internally in PFK.
* The purpose of the cache is to save access time to QSEE when loading keys.
* Currently the cache is the same size as the total number of keys that can
* be loaded to ICE. Since this number is relatively small, the algorithms for
* cache eviction are simple, linear and based on last usage timestamp, i.e
* the node that will be evicted is the one with the oldest timestamp.
* Empty entries always have the oldest timestamp.
*/
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/jiffies.h>
#include <linux/slab.h>
#include <linux/printk.h>
#include <linux/sched/signal.h>
#include "pfk_kc.h"
#include "pfk_ice.h"
/** the first available index in ice engine */
#define PFK_KC_STARTING_INDEX 2
/** currently the only supported key and salt sizes */
#define PFK_KC_KEY_SIZE 32
#define PFK_KC_SALT_SIZE 32
/** Table size */
#define PFK_KC_TABLE_SIZE ((32) - (PFK_KC_STARTING_INDEX))
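/*
 * With 32 hardware key slots and indexes 0 and 1 reserved for FDE, the cache
 * holds 32 - 2 = 30 entries, mapped to ICE key indexes 2..31.
 */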
/** The maximum key and salt size */
#define PFK_MAX_KEY_SIZE PFK_KC_KEY_SIZE
#define PFK_MAX_SALT_SIZE PFK_KC_SALT_SIZE
#define PFK_UFS "ufs"
#define PFK_UFS_CARD "ufscard"
static DEFINE_SPINLOCK(kc_lock);
static unsigned long flags;
static bool kc_ready;
static char *s_type = "sdcc";
/**
 * enum pfk_kc_entry_state - state of the entry inside kc table
 *
 * @FREE:                  entry is free
 * @ACTIVE_ICE_PRELOAD:    entry is actively used by ICE engine
 *                         and cannot be used by others. SCM call
 *                         to load key to ICE is pending to be performed
 * @ACTIVE_ICE_LOADED:     entry is actively used by ICE engine and
 *                         cannot be used by others. SCM call to load the
 *                         key to ICE was successfully executed and key is
 *                         now loaded
 * @INACTIVE_INVALIDATING: entry is being invalidated during file close
 *                         and cannot be used by others until invalidation
 *                         is complete
 * @INACTIVE:              entry's key is already loaded, but is not
 *                         currently being used. It can be re-used for
 *                         optimization and to avoid SCM call cost or
 *                         it can be taken by another key if there are
 *                         no FREE entries
 * @SCM_ERROR:             error occurred while scm call was performed to
 *                         load the key to ICE
 */
enum pfk_kc_entry_state {
FREE,
ACTIVE_ICE_PRELOAD,
ACTIVE_ICE_LOADED,
INACTIVE_INVALIDATING,
INACTIVE,
SCM_ERROR
};
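/*
 * Typical state flow, as implemented by the code below:
 * FREE -> ACTIVE_ICE_PRELOAD -> ACTIVE_ICE_LOADED -> INACTIVE. An INACTIVE
 * entry is either re-activated on a cache hit, evicted and reused for a new
 * key, or moved through INACTIVE_INVALIDATING back to FREE when its key is
 * removed. A failed SCM call leaves the entry in SCM_ERROR until the stored
 * error is returned to a caller and the entry is cleared.
 */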
struct kc_entry {
unsigned char key[PFK_MAX_KEY_SIZE];
size_t key_size;
unsigned char salt[PFK_MAX_SALT_SIZE];
size_t salt_size;
u64 time_stamp;
u32 key_index;
struct task_struct *thread_pending;
enum pfk_kc_entry_state state;
/* ref count for the number of requests in the HW queue for this key */
int loaded_ref_cnt;
int scm_error;
};
/**
* kc_is_ready() - driver is initialized and ready.
*
* Return: true if the key cache is ready.
*/
static inline bool kc_is_ready(void)
{
return kc_ready;
}
static inline void kc_spin_lock(void)
{
spin_lock_irqsave(&kc_lock, flags);
}
static inline void kc_spin_unlock(void)
{
spin_unlock_irqrestore(&kc_lock, flags);
}
/**
* pfk_kc_get_storage_type() - return the hardware storage type.
*
* Return: storage type queried during bootup.
*/
const char *pfk_kc_get_storage_type(void)
{
return s_type;
}
/**
* kc_entry_is_available() - checks whether the entry is available
*
* Return true if it is , false otherwise or if invalid
* Should be invoked under spinlock
*/
static bool kc_entry_is_available(const struct kc_entry *entry)
{
if (!entry)
return false;
return (entry->state == FREE || entry->state == INACTIVE);
}
/**
* kc_entry_wait_till_available() - waits till entry is available
*
* Returns 0 in case of success or -ERESTARTSYS if the wait was interrupted
 * by a signal
*
* Should be invoked under spinlock
*/
static int kc_entry_wait_till_available(struct kc_entry *entry)
{
int res = 0;
while (!kc_entry_is_available(entry)) {
set_current_state(TASK_INTERRUPTIBLE);
if (signal_pending(current)) {
res = -ERESTARTSYS;
break;
}
/* assuming only one thread can try to invalidate
* the same entry
*/
entry->thread_pending = current;
kc_spin_unlock();
schedule();
kc_spin_lock();
}
set_current_state(TASK_RUNNING);
return res;
}
/**
* kc_entry_start_invalidating() - moves entry to state
* INACTIVE_INVALIDATING
* If entry is in use, waits till
* it gets available
* @entry: pointer to entry
*
* Return 0 in case of success, otherwise error
* Should be invoked under spinlock
*/
static int kc_entry_start_invalidating(struct kc_entry *entry)
{
int res;
res = kc_entry_wait_till_available(entry);
if (res)
return res;
entry->state = INACTIVE_INVALIDATING;
return 0;
}
/**
* kc_entry_finish_invalidating() - moves entry to state FREE
* wakes up all the tasks waiting
* on it
*
* @entry: pointer to entry
*
 * Should be invoked under spinlock
*/
static void kc_entry_finish_invalidating(struct kc_entry *entry)
{
if (!entry)
return;
if (entry->state != INACTIVE_INVALIDATING)
return;
entry->state = FREE;
}
/**
* kc_min_entry() - compare two entries to find one with minimal time
* @a: ptr to the first entry. If NULL the other entry will be returned
* @b: pointer to the second entry
*
 * Return the entry whose timestamp is minimal, or b if a is NULL
*/
static inline struct kc_entry *kc_min_entry(struct kc_entry *a,
struct kc_entry *b)
{
if (!a)
return b;
if (time_before64(b->time_stamp, a->time_stamp))
return b;
return a;
}
/**
* kc_entry_at_index() - return entry at specific index
* @index: index of entry to be accessed
*
* Return entry
* Should be invoked under spinlock
*/
static struct kc_entry *kc_entry_at_index(int index,
struct ice_device *ice_dev)
{
return (struct kc_entry *)(ice_dev->key_table) + index;
}
/**
* kc_find_key_at_index() - find kc entry starting at specific index
* @key: key to look for
* @key_size: the key size
* @salt: salt to look for
* @salt_size: the salt size
 * @starting_index: index to start the search with; if an entry is found, updated with
* index of that entry
*
* Return entry or NULL in case of error
* Should be invoked under spinlock
*/
static struct kc_entry *kc_find_key_at_index(const unsigned char *key,
size_t key_size, const unsigned char *salt, size_t salt_size,
struct ice_device *ice_dev, int *starting_index)
{
struct kc_entry *entry = NULL;
int i = 0;
for (i = *starting_index; i < PFK_KC_TABLE_SIZE; i++) {
entry = kc_entry_at_index(i, ice_dev);
if (salt != NULL) {
if (entry->salt_size != salt_size)
continue;
if (memcmp(entry->salt, salt, salt_size) != 0)
continue;
}
if (entry->key_size != key_size)
continue;
if (memcmp(entry->key, key, key_size) == 0) {
*starting_index = i;
return entry;
}
}
return NULL;
}
/**
* kc_find_key() - find kc entry
* @key: key to look for
* @key_size: the key size
* @salt: salt to look for
* @salt_size: the salt size
*
* Return entry or NULL in case of error
* Should be invoked under spinlock
*/
static struct kc_entry *kc_find_key(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size,
struct ice_device *ice_dev)
{
int index = 0;
return kc_find_key_at_index(key, key_size, salt, salt_size,
ice_dev, &index);
}
/**
* kc_find_oldest_entry_non_locked() - finds the entry with minimal timestamp
* that is not locked
*
* Returns entry with minimal timestamp. Empty entries have timestamp
* of 0, therefore they are returned first.
* If all the entries are locked, will return NULL
* Should be invoked under spin lock
*/
static struct kc_entry *kc_find_oldest_entry_non_locked(
struct ice_device *ice_dev)
{
struct kc_entry *curr_min_entry = NULL;
struct kc_entry *entry = NULL;
int i = 0;
for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
entry = kc_entry_at_index(i, ice_dev);
if (entry->state == FREE)
return entry;
if (entry->state == INACTIVE)
curr_min_entry = kc_min_entry(curr_min_entry, entry);
}
return curr_min_entry;
}
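/*
 * A minimal standalone sketch of the eviction policy above (illustrative
 * only; the pfk_example_ name and the EX_* stand-ins are hypothetical).
 * FREE entries win immediately; otherwise the INACTIVE entry with the
 * oldest timestamp is chosen; if neither exists, -1 is returned and the
 * caller backs off (the code above returns NULL and propagates -EBUSY).
 */
static inline int pfk_example_pick_victim(const int *state,
					  const unsigned long long *stamp,
					  int n)
{
	enum { EX_FREE = 0, EX_INACTIVE = 4 };	/* stand-ins for FREE/INACTIVE */
	int i, victim = -1;

	for (i = 0; i < n; i++) {
		if (state[i] == EX_FREE)
			return i;		/* empty slot: use it right away */
		if (state[i] == EX_INACTIVE &&
		    (victim < 0 || stamp[i] < stamp[victim]))
			victim = i;		/* oldest inactive entry so far */
	}
	return victim;
}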
/**
* kc_update_timestamp() - updates timestamp of entry to current
*
* @entry: entry to update
*
*/
static void kc_update_timestamp(struct kc_entry *entry)
{
if (!entry)
return;
entry->time_stamp = get_jiffies_64();
}
/**
* kc_clear_entry() - clear the key from entry and mark entry not in use
*
* @entry: pointer to entry
*
* Should be invoked under spinlock
*/
static void kc_clear_entry(struct kc_entry *entry)
{
if (!entry)
return;
memset(entry->key, 0, entry->key_size);
memset(entry->salt, 0, entry->salt_size);
entry->key_size = 0;
entry->salt_size = 0;
entry->time_stamp = 0;
entry->scm_error = 0;
entry->state = FREE;
entry->loaded_ref_cnt = 0;
entry->thread_pending = NULL;
}
/**
* kc_update_entry() - replaces the key in given entry and
* loads the new key to ICE
*
* @entry: entry to replace key in
* @key: key
* @key_size: key_size
* @salt: salt
* @salt_size: salt_size
* @data_unit: dun size
*
* The previous key is securely released and wiped, the new one is loaded
* to ICE.
* Should be invoked under spinlock
 * The caller must validate that key_size/salt_size match the buffer sizes in struct kc_entry
*/
static int kc_update_entry(struct kc_entry *entry, const unsigned char *key,
size_t key_size, const unsigned char *salt, size_t salt_size,
unsigned int data_unit, struct ice_device *ice_dev)
{
int ret;
kc_clear_entry(entry);
memcpy(entry->key, key, key_size);
entry->key_size = key_size;
memcpy(entry->salt, salt, salt_size);
entry->salt_size = salt_size;
/* Mark entry as no longer free before releasing the lock */
entry->state = ACTIVE_ICE_PRELOAD;
kc_spin_unlock();
ret = qti_pfk_ice_set_key(entry->key_index, entry->key,
entry->salt, ice_dev, data_unit);
kc_spin_lock();
return ret;
}
/**
* pfk_kc_init() - init function
*
* Return 0 in case of success, error otherwise
*/
static int pfk_kc_init(struct ice_device *ice_dev)
{
int i = 0;
struct kc_entry *entry = NULL;
kc_spin_lock();
for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
entry = kc_entry_at_index(i, ice_dev);
entry->key_index = PFK_KC_STARTING_INDEX + i;
}
kc_ready = true;
kc_spin_unlock();
return 0;
}
/**
 * pfk_kc_deinit() - deinit function
*
* Return 0 in case of success, error otherwise
*/
int pfk_kc_deinit(void)
{
kc_ready = false;
return 0;
}
/**
* pfk_kc_load_key_start() - retrieve the key from cache or add it if
* it's not there and return the ICE hw key index in @key_index.
* @key: pointer to the key
* @key_size: the size of the key
* @salt: pointer to the salt
* @salt_size: the size of the salt
* @key_index: the pointer to key_index where the output will be stored
 * @async: true when SCM calls must not be made from the caller context
 * @data_unit: data unit (DUN granularity) setting passed to the ICE key
 * programming call
 * @ice_dev: ICE device to load the key into
*
 * If the key is present in the cache, then the key_index will be retrieved from cache.
* If it is not present, the oldest entry from kc table will be evicted,
* the key will be loaded to ICE via QSEE to the index that is the evicted
* entry number and stored in cache.
* Entry that is going to be used is marked as being used, it will mark
* as not being used when ICE finishes using it and pfk_kc_load_key_end
* will be invoked.
* As QSEE calls can only be done from a non-atomic context, when @async flag
* is set to 'false', it specifies that it is ok to make the calls in the
* current context. Otherwise, when @async is set, the caller should retry the
* call again from a different context, and -EAGAIN error will be returned.
*
* Return 0 in case of success, error otherwise
*/
int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size, u32 *key_index,
bool async, unsigned int data_unit, struct ice_device *ice_dev)
{
int ret = 0;
struct kc_entry *entry = NULL;
bool entry_exists = false;
if (!kc_is_ready())
return -ENODEV;
if (!key || !salt || !key_index) {
pr_err("%s key/salt/key_index NULL\n", __func__);
return -EINVAL;
}
if (key_size != PFK_KC_KEY_SIZE) {
pr_err("unsupported key size %zu\n", key_size);
return -EINVAL;
}
if (salt_size != PFK_KC_SALT_SIZE) {
pr_err("unsupported salt size %zu\n", salt_size);
return -EINVAL;
}
kc_spin_lock();
entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
if (!entry) {
if (async) {
pr_debug("%s task will populate entry\n", __func__);
kc_spin_unlock();
return -EAGAIN;
}
entry = kc_find_oldest_entry_non_locked(ice_dev);
if (!entry) {
			/* could not find a single non-locked entry,
* return EBUSY to upper layers so that the
* request will be rescheduled
*/
kc_spin_unlock();
return -EBUSY;
}
} else {
entry_exists = true;
}
pr_debug("entry with index %d is in state %d\n",
entry->key_index, entry->state);
switch (entry->state) {
case (INACTIVE):
if (entry_exists) {
kc_update_timestamp(entry);
entry->state = ACTIVE_ICE_LOADED;
if (!strcmp(ice_dev->ice_instance_type,
(char *)PFK_UFS) ||
!strcmp(ice_dev->ice_instance_type,
(char *)PFK_UFS_CARD)) {
if (async)
entry->loaded_ref_cnt++;
} else {
entry->loaded_ref_cnt++;
}
break;
}
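		/* fall through: an evicted INACTIVE entry is reprogrammed like a FREE one */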
case (FREE):
ret = kc_update_entry(entry, key, key_size, salt, salt_size,
data_unit, ice_dev);
if (ret) {
entry->state = SCM_ERROR;
entry->scm_error = ret;
pr_err("%s: key load error (%d)\n", __func__, ret);
} else {
kc_update_timestamp(entry);
entry->state = ACTIVE_ICE_LOADED;
/*
* In case of UFS only increase ref cnt for async calls,
* sync calls from within work thread do not pass
* requests further to HW
*/
if (!strcmp(ice_dev->ice_instance_type,
(char *)PFK_UFS) ||
!strcmp(ice_dev->ice_instance_type,
(char *)PFK_UFS_CARD)) {
if (async)
entry->loaded_ref_cnt++;
} else {
entry->loaded_ref_cnt++;
}
}
break;
case (ACTIVE_ICE_PRELOAD):
case (INACTIVE_INVALIDATING):
ret = -EAGAIN;
break;
case (ACTIVE_ICE_LOADED):
kc_update_timestamp(entry);
if (!strcmp(ice_dev->ice_instance_type, (char *)PFK_UFS) ||
!strcmp(ice_dev->ice_instance_type,
(char *)PFK_UFS_CARD)) {
if (async)
entry->loaded_ref_cnt++;
} else {
entry->loaded_ref_cnt++;
}
break;
	case (SCM_ERROR):
ret = entry->scm_error;
kc_clear_entry(entry);
entry->state = FREE;
break;
default:
pr_err("invalid state %d for entry with key index %d\n",
entry->state, entry->key_index);
ret = -EINVAL;
}
*key_index = entry->key_index;
kc_spin_unlock();
return ret;
}
/**
 * pfk_kc_load_key_end() - finish the process of key loading that was started
 * by pfk_kc_load_key_start, by marking the entry as no longer in use.
 *
 * @key: pointer to the key
 * @key_size: the size of the key
 * @salt: pointer to the salt
 * @salt_size: the size of the salt
 * @ice_dev: ICE device the key was loaded into
 *
*/
void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size,
struct ice_device *ice_dev)
{
struct kc_entry *entry = NULL;
struct task_struct *tmp_pending = NULL;
int ref_cnt = 0;
if (!kc_is_ready())
return;
if (!key || !salt)
return;
if (key_size != PFK_KC_KEY_SIZE)
return;
if (salt_size != PFK_KC_SALT_SIZE)
return;
kc_spin_lock();
entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
if (!entry) {
kc_spin_unlock();
pr_err("internal error, there should an entry to unlock\n");
return;
}
ref_cnt = --entry->loaded_ref_cnt;
if (ref_cnt < 0)
pr_err("internal error, ref count should never be negative\n");
if (!ref_cnt) {
entry->state = INACTIVE;
/*
* wake-up invalidation if it's waiting
* for the entry to be released
*/
if (entry->thread_pending) {
tmp_pending = entry->thread_pending;
entry->thread_pending = NULL;
kc_spin_unlock();
wake_up_process(tmp_pending);
return;
}
}
kc_spin_unlock();
}
/**
* pfk_kc_remove_key_with_salt() - remove the key and salt from cache
* and from ICE engine.
* @key: pointer to the key
* @key_size: the size of the key
 * @salt: pointer to the salt
 * @salt_size: the size of the salt
 *
 * Return 0 in case of success, error otherwise (also in case of a
 * non-existing key)
*/
int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size)
{
struct kc_entry *entry = NULL;
struct list_head *ice_dev_list = NULL;
struct ice_device *ice_dev;
int res = 0;
if (!kc_is_ready())
return -ENODEV;
if (!key)
return -EINVAL;
if (!salt)
return -EINVAL;
if (key_size != PFK_KC_KEY_SIZE)
return -EINVAL;
if (salt_size != PFK_KC_SALT_SIZE)
return -EINVAL;
kc_spin_lock();
ice_dev_list = get_ice_dev_list();
if (!ice_dev_list) {
pr_err("%s: Did not find ICE device head\n", __func__);
kc_spin_unlock();	/* don't leak kc_lock on the error path */
return -ENODEV;
}
list_for_each_entry(ice_dev, ice_dev_list, list) {
entry = kc_find_key(key, key_size, salt, salt_size, ice_dev);
if (entry) {
pr_debug("%s: Found entry for ice_dev number %d\n",
__func__, ice_dev->device_no);
break;
}
pr_debug("%s: Can't find entry for ice_dev number %d\n",
__func__, ice_dev->device_no);
}
if (!entry) {
pr_debug("%s: Cannot find entry for any ice device\n",
__func__);
kc_spin_unlock();
return -EINVAL;
}
res = kc_entry_start_invalidating(entry);
if (res != 0) {
kc_spin_unlock();
return res;
}
kc_clear_entry(entry);
kc_spin_unlock();
qti_pfk_ice_invalidate_key(entry->key_index, ice_dev);
kc_spin_lock();
kc_entry_finish_invalidating(entry);
kc_spin_unlock();
return 0;
}
/**
* pfk_kc_clear() - clear the table and remove all keys from ICE
*
* Return 0 on success, error otherwise
*
*/
int pfk_kc_clear(struct ice_device *ice_dev)
{
struct kc_entry *entry = NULL;
int i = 0;
int res = 0;
if (!kc_is_ready())
return -ENODEV;
kc_spin_lock();
for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
entry = kc_entry_at_index(i, ice_dev);
res = kc_entry_start_invalidating(entry);
if (res != 0) {
kc_spin_unlock();
goto out;
}
kc_clear_entry(entry);
}
kc_spin_unlock();
for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
qti_pfk_ice_invalidate_key(
kc_entry_at_index(i, ice_dev)->key_index, ice_dev);
/* fall through */
res = 0;
out:
kc_spin_lock();
for (i = 0; i < PFK_KC_TABLE_SIZE; i++)
kc_entry_finish_invalidating(kc_entry_at_index(i, ice_dev));
kc_spin_unlock();
return res;
}
/**
 * pfk_kc_clear_on_reset() - clear the key cache table on storage core reset.
 * The assumption is that at this point we don't have any pending transactions.
 * Also, there is no need to clear keys from ICE, since the core reset has
 * already wiped the key configuration in hardware.
 *
 */
void pfk_kc_clear_on_reset(struct ice_device *ice_dev)
{
struct kc_entry *entry = NULL;
int i = 0;
if (!kc_is_ready())
return;
kc_spin_lock();
for (i = 0; i < PFK_KC_TABLE_SIZE; i++) {
entry = kc_entry_at_index(i, ice_dev);
kc_clear_entry(entry);
}
kc_spin_unlock();
}
static int pfk_kc_find_storage_type(char **device)
{
char boot[20] = {'\0'};
char *match = (char *)strnstr(saved_command_line,
"androidboot.bootdevice=",
strlen(saved_command_line));
if (match) {
memcpy(boot, (match + strlen("androidboot.bootdevice=")),
sizeof(boot) - 1);
if (strnstr(boot, PFK_UFS, strlen(boot)))
*device = PFK_UFS;
return 0;
}
return -EINVAL;
}
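/*
 * A minimal standalone sketch of the boot-device probing above (illustrative
 * only; the example_ name and the sample command line are hypothetical).
 * Given a kernel command line such as
 *   "console=ttyMSM0 androidboot.bootdevice=1d84000.ufshc root=/dev/sda"
 * it reports whether the boot device name contains "ufs"; if the parameter
 * is absent, the caller keeps the default storage type ("sdcc").
 */
#include <string.h>	/* for the standalone sketch only */

static int example_boot_device_is_ufs(const char *cmdline)
{
	const char *tag = "androidboot.bootdevice=";
	const char *match = strstr(cmdline, tag);
	char boot[20] = { 0 };
	size_t i;

	if (!match)
		return -1;			/* parameter not present */
	match += strlen(tag);
	for (i = 0; i < sizeof(boot) - 1 && match[i] && match[i] != ' '; i++)
		boot[i] = match[i];		/* copy the device name token */
	return strstr(boot, "ufs") != NULL;	/* e.g. "1d84000.ufshc" -> UFS */
}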
int pfk_kc_initialize_key_table(struct ice_device *ice_dev)
{
int res = 0;
struct kc_entry *kc_table;
kc_table = kzalloc(PFK_KC_TABLE_SIZE*sizeof(struct kc_entry),
GFP_KERNEL);
if (!kc_table) {
res = -ENOMEM;
pr_err("%s: Error %d allocating memory for key table\n",
__func__, res);
return res;	/* don't initialize a table that was never allocated */
}
ice_dev->key_table = kc_table;
pfk_kc_init(ice_dev);
return res;
}
static int __init pfk_kc_pre_init(void)
{
return pfk_kc_find_storage_type(&s_type);
}
static void __exit pfk_kc_exit(void)
{
s_type = NULL;
}
module_init(pfk_kc_pre_init);
module_exit(pfk_kc_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Per-File-Key-KC driver");

View File

@ -1,29 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015-2019, The Linux Foundation. All rights reserved.
*/
#ifndef PFK_KC_H_
#define PFK_KC_H_
#include <linux/types.h>
#include <crypto/ice.h>
int pfk_kc_deinit(void);
int pfk_kc_load_key_start(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size, u32 *key_index,
bool async, unsigned int data_unit, struct ice_device *ice_dev);
void pfk_kc_load_key_end(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size,
struct ice_device *ice_dev);
int pfk_kc_remove_key_with_salt(const unsigned char *key, size_t key_size,
const unsigned char *salt, size_t salt_size);
int pfk_kc_clear(struct ice_device *ice_dev);
void pfk_kc_clear_on_reset(struct ice_device *ice_dev);
int pfk_kc_initialize_key_table(struct ice_device *ice_dev);
const char *pfk_kc_get_storage_type(void);
extern char *saved_command_line;
#endif /* PFK_KC_H_ */

View File

@ -623,14 +623,6 @@ int security_inode_create(struct inode *dir, struct dentry *dentry, umode_t mode
}
EXPORT_SYMBOL_GPL(security_inode_create);
int security_inode_post_create(struct inode *dir, struct dentry *dentry,
umode_t mode)
{
if (unlikely(IS_PRIVATE(dir)))
return 0;
return call_int_hook(inode_post_create, 0, dir, dentry, mode);
}
int security_inode_link(struct dentry *old_dentry, struct inode *dir,
struct dentry *new_dentry)
{

View File

@ -26,7 +26,8 @@
#include <linux/in.h>
#include <linux/spinlock.h>
#include <net/net_namespace.h>
#include "security.h"
#include "flask.h"
#include "avc.h"
struct task_security_struct {
u32 osid; /* SID prior to last execve */
@ -63,8 +64,6 @@ struct inode_security_struct {
u32 sid; /* SID of this object */
u16 sclass; /* security class of this object */
unsigned char initialized; /* initialization flag */
u32 tag; /* Per-File-Encryption tag */
void *pfk_data; /* Per-File-Key data from ecryptfs */
spinlock_t lock;
};

View File

@ -15,6 +15,7 @@
#include <linux/types.h>
#include <linux/refcount.h>
#include <linux/workqueue.h>
#include "flask.h"
#define SECSID_NULL 0x00000000 /* unspecified SID */
#define SECSID_WILD 0xffffffff /* wildcard SID */