Revert "mmc: driver's changes from kernel msm-4.14 to msm-4.19"

This reverts commit 4f50c26c01.

This change is being reverted because the original commit contained only SD card
changes, and the eMMC and SD card changes should land together.
Reverting this change makes room for a subsequent dependent change that
ports both the eMMC and SD card changes.

Change-Id: I450c8585b0c8e8af087475880a12425e5de4d1a0
Signed-off-by: Ram Prakash Gupta <rampraka@codeaurora.org>
This commit is contained in:
Ram Prakash Gupta 2019-04-10 12:43:04 +05:30
parent e1f4735c37
commit 7747af635a
42 changed files with 1733 additions and 11345 deletions


@@ -8,40 +8,6 @@ The following attributes are read/write.
force_ro Enforce read-only access even if write protect switch is off.
num_wr_reqs_to_start_packing This attribute is used to determine
the trigger for activating the write packing, in case the write
packing control feature is enabled.
When the MMC manages to reach a point where num_wr_reqs_to_start_packing
write requests could be packed, it enables the write packing feature.
This allows us to start the write packing only when it is beneficial
and has minimal effect on the read latency.
The number of potential packed requests that will trigger the packing
can be configured via sysfs by writing the required value to:
/sys/block/<block_dev_name>/num_wr_reqs_to_start_packing.
The default value of num_wr_reqs_to_start_packing was determined by
running parallel lmdd write and lmdd read operations and calculating
the max number of packed write requests.
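The write packing control described above can be sketched as a small state machine: a run of consecutive packable write requests is counted, and packing is enabled once the run reaches the configured threshold. The structure and function names below are hypothetical, not the driver's actual identifiers.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the write-packing trigger: packing turns on
 * once num_wr_reqs_to_start_packing consecutive packable writes are seen. */
struct wr_pack_ctl {
	unsigned int num_wr_reqs_to_start_packing; /* sysfs-tunable threshold */
	unsigned int potential_packed_wr_reqs;     /* current run of writes */
	bool packing_enabled;
};

static void wr_pack_account_write(struct wr_pack_ctl *ctl)
{
	if (ctl->packing_enabled)
		return;
	if (++ctl->potential_packed_wr_reqs >= ctl->num_wr_reqs_to_start_packing)
		ctl->packing_enabled = true;
}

static void wr_pack_account_read(struct wr_pack_ctl *ctl)
{
	/* A read breaks the run of packable writes, so packing is
	 * re-evaluated from scratch (keeping read latency impact low). */
	ctl->potential_packed_wr_reqs = 0;
	ctl->packing_enabled = false;
}
```

This mirrors the rationale above: packing only activates when enough back-to-back writes make it beneficial.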
SD and MMC Device Attributes
============================
@@ -109,51 +75,3 @@ Note on raw_rpmb_size_mult:
"raw_rpmb_size_mult" is a multiple of 128kB block.
RPMB size in byte is calculated by using the following equation:
RPMB partition size = 128kB x raw_rpmb_size_mult
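The equation above is a straight multiplication; as a minimal illustration (the helper name is made up for this sketch):

```c
#include <assert.h>

/* RPMB partition size = 128 KiB x raw_rpmb_size_mult, per the note above. */
static unsigned long long rpmb_size_bytes(unsigned int raw_rpmb_size_mult)
{
	return 128ULL * 1024 * raw_rpmb_size_mult;
}
```

For example, a device reporting raw_rpmb_size_mult = 16 has a 2 MiB RPMB partition.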
SD/MMC/SDIO Clock Gating Attribute
==================================
Read and write access is provided to the following attribute.
This attribute appears only if CONFIG_MMC_CLKGATE is enabled.
clkgate_delay Tune the clock gating delay with desired value in milliseconds.
echo <desired delay> > /sys/class/mmc_host/mmcX/clkgate_delay
SD/MMC/SDIO Clock Scaling Attributes
====================================
Read and write accesses are provided to the following attributes.
polling_interval Measured in milliseconds, this attribute
defines how often we need to check the card
usage and make decisions on frequency scaling.
up_threshold This attribute defines what should be the
average card usage between the polling
interval for the mmc core to make a decision
on whether it should increase the frequency.
For example when it is set to '35' it means
that between the checking intervals the card
needs to be on average more than 35% in use to
scale up the frequency. The value should be
between 0 - 100 so that it can be compared
against load percentage.
down_threshold Similar to up_threshold, but on lowering the
frequency. For example, when it is set to '2'
it means that between the checking intervals
the card needs to be on average less than 2%
in use to scale down the clocks to minimum
frequency. The value should be between 0 - 100
so that it can be compared against load
percentage.
enable Enable clock scaling for hosts (and cards)
that support ultrahigh speed modes
(SDR104, DDR50, HS200).
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/polling_interval
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/up_threshold
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/down_threshold
echo <desired value> > /sys/class/mmc_host/mmcX/clk_scaling/enable
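The per-polling-interval decision described by up_threshold and down_threshold amounts to comparing the measured load percentage against the two bounds. The enum and function below are an illustrative sketch, not the driver's actual clock-scaling code:

```c
#include <assert.h>

enum scale_decision { SCALE_NONE, SCALE_UP, SCALE_DOWN };

/* Hypothetical sketch of the decision taken once per polling_interval:
 * load_pct is the average card usage (0-100) over the interval. */
static enum scale_decision clk_scale_decide(unsigned int load_pct,
					    unsigned int up_threshold,
					    unsigned int down_threshold)
{
	if (load_pct > up_threshold)
		return SCALE_UP;    /* busier than up_threshold: raise freq */
	if (load_pct < down_threshold)
		return SCALE_DOWN;  /* idler than down_threshold: lower freq */
	return SCALE_NONE;          /* in between: keep current frequency */
}
```

With the documented defaults (up_threshold 35, down_threshold 2), a 50% load scales up and a 1% load scales down.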


@@ -12,16 +12,6 @@ menuconfig MMC
If you want MMC/SD/SDIO support, you should say Y here and
also to your specific host controller driver.
config MMC_PERF_PROFILING
bool "MMC performance profiling"
depends on MMC != n
default n
help
This enables the support for collecting performance numbers
for the MMC at the Queue and Host layers.
If you want to collect MMC performance numbers, say Y here.
if MMC
source "drivers/mmc/core/Kconfig"


@@ -61,17 +61,6 @@ config MMC_BLOCK_MINORS
If unsure, say 8 here.
config MMC_BLOCK_DEFERRED_RESUME
bool "Defer MMC layer resume until I/O is requested"
depends on MMC_BLOCK
default n
help
Say Y here to enable deferred MMC resume until I/O
is requested.
This will reduce overall resume latency and
save power when there is an SD card inserted but not being used.
config SDIO_UART
tristate "SDIO UART/GPS class support"
depends on TTY
@@ -91,23 +80,3 @@ config MMC_TEST
This driver is only of interest to those developing or
testing a host driver. Most people should say N here.
config MMC_RING_BUFFER
bool "MMC_RING_BUFFER"
depends on MMC
default n
help
This enables the ring buffer tracing of significant
events for mmc driver to provide command history for
debugging purpose.
If unsure, say N.
config MMC_CLKGATE
bool "MMC host clock gating"
help
This will attempt to aggressively gate the clock to the MMC card.
This is done to save power due to gating off the logic and bus
noise when the MMC card is not in use. Your host driver has to
support handling this in order for it to be of any use.
If unsure, say N.


@@ -14,7 +14,6 @@ obj-$(CONFIG_PWRSEQ_SIMPLE) += pwrseq_simple.o
obj-$(CONFIG_PWRSEQ_SD8787) += pwrseq_sd8787.o
obj-$(CONFIG_PWRSEQ_EMMC) += pwrseq_emmc.o
mmc_core-$(CONFIG_DEBUG_FS) += debugfs.o
obj-$(CONFIG_MMC_RING_BUFFER) += ring_buffer.o
obj-$(CONFIG_MMC_BLOCK) += mmc_block.o
mmc_block-objs := block.o queue.o
obj-$(CONFIG_MMC_TEST) += mmc_test.o


@@ -31,7 +31,6 @@
#include <linux/cdev.h>
#include <linux/mutex.h>
#include <linux/scatterlist.h>
#include <linux/bitops.h>
#include <linux/string_helpers.h>
#include <linux/delay.h>
#include <linux/capability.h>
@@ -42,7 +41,6 @@
#include <linux/mmc/ioctl.h>
#include <linux/mmc/card.h>
#include <linux/mmc/core.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/sd.h>
@@ -71,7 +69,7 @@ MODULE_ALIAS("mmc:block");
* second software timer to timeout the whole request, so 10 seconds should be
* ample.
*/
#define MMC_BLK_TIMEOUT_MS (30 * 1000)
#define MMC_BLK_TIMEOUT_MS (10 * 1000)
#define MMC_SANITIZE_REQ_TIMEOUT 240000
#define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
#define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
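The two EXTRACT macros above unpack the EXT_CSD index and value fields from the packed ioctl argument (bits 23:16 and 15:8 respectively). A standalone check, using the macro bodies as quoted above:

```c
#include <assert.h>

/* As in the diff above: byte 2 carries the EXT_CSD index, byte 1 the value. */
#define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
#define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
```

For example, an argument of 0x00B70100 decodes to index 0xB7 with value 0x01.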
@@ -112,7 +110,6 @@ struct mmc_blk_data {
unsigned int flags;
#define MMC_BLK_CMD23 (1 << 0) /* Can do SET_BLOCK_COUNT for multiblock */
#define MMC_BLK_REL_WR (1 << 1) /* MMC Reliable write support */
#define MMC_BLK_PACKED_CMD (1 << 2) /* MMC packed command support */
unsigned int usage;
unsigned int read_only;
@@ -123,7 +120,7 @@ struct mmc_blk_data {
#define MMC_BLK_DISCARD BIT(2)
#define MMC_BLK_SECDISCARD BIT(3)
#define MMC_BLK_CQE_RECOVERY BIT(4)
#define MMC_BLK_FLUSH BIT(5)
/*
* Only set in main mmc_blk_data associated
* with mmc_card with dev_set_drvdata, and keeps
@@ -215,11 +212,11 @@ static ssize_t power_ro_lock_show(struct device *dev,
struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
struct mmc_card *card;
int locked = 0;
if (!md)
return -EINVAL;
card = md->queue.card;
if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PERM_WP_EN)
locked = 2;
else if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PWR_WP_EN)
@@ -284,7 +281,6 @@ static ssize_t force_ro_show(struct device *dev, struct device_attribute *attr,
{
int ret;
struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
if (!md)
return -EINVAL;
@@ -302,7 +298,6 @@ static ssize_t force_ro_store(struct device *dev, struct device_attribute *attr,
char *end;
struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev));
unsigned long set = simple_strtoul(buf, &end, 0);
if (!md)
return -EINVAL;
@@ -322,6 +317,9 @@ static int mmc_blk_open(struct block_device *bdev, fmode_t mode)
{
struct mmc_blk_data *md = mmc_blk_get(bdev->bd_disk);
int ret = -ENXIO;
if (!md)
return -EINVAL;
mutex_lock(&block_mutex);
if (md) {
@@ -461,10 +459,9 @@ static int ioctl_do_sanitize(struct mmc_card *card)
{
int err;
if (!mmc_can_sanitize(card) &&
(card->host->caps2 & MMC_CAP2_SANITIZE)) {
if (!mmc_can_sanitize(card)) {
pr_warn("%s: %s - SANITIZE is not supported\n",
mmc_hostname(card->host), __func__);
mmc_hostname(card->host), __func__);
err = -EOPNOTSUPP;
goto out;
}
@@ -656,13 +653,13 @@ static int mmc_blk_ioctl_cmd(struct mmc_blk_data *md,
struct request *req;
idata = mmc_blk_ioctl_copy_from_user(ic_ptr);
if (IS_ERR_OR_NULL(idata))
if (IS_ERR(idata))
return PTR_ERR(idata);
/* This will be NULL on non-RPMB ioctl():s */
idata->rpmb = rpmb;
card = md->queue.card;
if (IS_ERR_OR_NULL(card)) {
if (IS_ERR(card)) {
err = PTR_ERR(card);
goto cmd_done;
}
@@ -873,8 +870,7 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
int ret = 0;
struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
if ((main_md->part_curr == part_type) &&
(card->part_curr == part_type))
if (main_md->part_curr == part_type)
return 0;
if (mmc_card_mmc(card)) {
@@ -891,15 +887,11 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
EXT_CSD_PART_CONFIG, part_config,
card->ext_csd.part_time);
if (ret) {
pr_err("%s: %s: switch failure, %d -> %d\n",
mmc_hostname(card->host), __func__,
main_md->part_curr, part_type);
mmc_blk_part_switch_post(card, part_type);
return ret;
}
card->ext_csd.part_config = part_config;
card->part_curr = part_type;
ret = mmc_blk_part_switch_post(card, main_md->part_curr);
}
@@ -1055,15 +1047,8 @@ static int mmc_blk_reset(struct mmc_blk_data *md, struct mmc_host *host,
md->reset_done |= type;
err = mmc_hw_reset(host);
if (err && err != -EOPNOTSUPP) {
/* We failed to reset so we need to abort the request */
pr_err("%s: %s: failed to reset %d\n", mmc_hostname(host),
__func__, err);
return -ENODEV;
}
/* Ensure we switch back to the correct partition */
if (host->card) {
if (err != -EOPNOTSUPP) {
struct mmc_blk_data *main_md =
dev_get_drvdata(&host->card->dev);
int part_err;
@@ -1270,21 +1255,6 @@ static void mmc_blk_issue_flush(struct mmc_queue *mq, struct request *req)
int ret = 0;
ret = mmc_flush_cache(card);
if (ret == -ENODEV) {
pr_err("%s: %s: restart mmc card\n",
req->rq_disk->disk_name, __func__);
if (mmc_blk_reset(md, card->host, MMC_BLK_FLUSH))
pr_err("%s: %s: fail to restart mmc\n",
req->rq_disk->disk_name, __func__);
else
mmc_blk_reset_success(md, MMC_BLK_FLUSH);
}
if (ret) {
pr_err("%s: %s: notify flush error to upper layers\n",
req->rq_disk->disk_name, __func__);
ret = -EIO;
}
blk_mq_end_request(req, ret ? BLK_STS_IOERR : BLK_STS_OK);
}
@@ -2994,10 +2964,6 @@ static int mmc_blk_probe(struct mmc_card *card)
dev_set_drvdata(&card->dev, md);
#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
mmc_set_bus_resume_policy(card->host, 1);
#endif
if (mmc_add_disk(md))
goto out;
@@ -3009,7 +2975,7 @@ static int mmc_blk_probe(struct mmc_card *card)
/* Add two debugfs entries */
mmc_blk_add_debugfs(card, md);
pm_runtime_set_autosuspend_delay(&card->dev, MMC_AUTOSUSPEND_DELAY_MS);
pm_runtime_set_autosuspend_delay(&card->dev, 3000);
pm_runtime_use_autosuspend(&card->dev);
/*
@@ -3046,9 +3012,6 @@ static void mmc_blk_remove(struct mmc_card *card)
pm_runtime_put_noidle(&card->dev);
mmc_blk_remove_req(md);
dev_set_drvdata(&card->dev, NULL);
#ifdef CONFIG_MMC_BLOCK_DEFERRED_RESUME
mmc_set_bus_resume_policy(card->host, 0);
#endif
destroy_workqueue(card->complete_wq);
}


@@ -134,16 +134,6 @@ static void mmc_bus_shutdown(struct device *dev)
struct mmc_host *host = card->host;
int ret;
if (!drv) {
pr_debug("%s: %s: drv is NULL\n", dev_name(dev), __func__);
return;
}
if (!card) {
pr_debug("%s: %s: card is NULL\n", dev_name(dev), __func__);
return;
}
if (dev->driver && drv->shutdown)
drv->shutdown(card);
@@ -166,8 +156,6 @@ static int mmc_bus_suspend(struct device *dev)
if (ret)
return ret;
if (mmc_bus_needs_resume(host))
return 0;
ret = host->bus_ops->suspend(host);
if (ret)
pm_generic_resume(dev);
@@ -181,17 +169,11 @@ static int mmc_bus_resume(struct device *dev)
struct mmc_host *host = card->host;
int ret;
if (mmc_bus_manual_resume(host)) {
host->bus_resume_flags |= MMC_BUSRESUME_NEEDS_RESUME;
goto skip_full_resume;
}
ret = host->bus_ops->resume(host);
if (ret)
pr_warn("%s: error %d during resume (card was removed?)\n",
mmc_hostname(host), ret);
skip_full_resume:
ret = pm_generic_resume(dev);
return ret;
}
@@ -203,9 +185,6 @@ static int mmc_runtime_suspend(struct device *dev)
struct mmc_card *card = mmc_dev_to_card(dev);
struct mmc_host *host = card->host;
if (mmc_bus_needs_resume(host))
return 0;
return host->bus_ops->runtime_suspend(host);
}
@@ -214,12 +193,8 @@ static int mmc_runtime_resume(struct device *dev)
struct mmc_card *card = mmc_dev_to_card(dev);
struct mmc_host *host = card->host;
if (mmc_bus_needs_resume(host))
host->bus_resume_flags &= ~MMC_BUSRESUME_NEEDS_RESUME;
return host->bus_ops->runtime_resume(host);
}
#endif /* !CONFIG_PM */
static const struct dev_pm_ops mmc_bus_pm_ops = {
@@ -303,8 +278,6 @@ struct mmc_card *mmc_alloc_card(struct mmc_host *host, struct device_type *type)
card->dev.release = mmc_release_card;
card->dev.type = type;
spin_lock_init(&card->bkops.stats.lock);
return card;
}
@@ -380,19 +353,13 @@ int mmc_add_card(struct mmc_card *card)
#endif
card->dev.of_node = mmc_of_find_child_device(card->host, 0);
if (mmc_card_sdio(card)) {
ret = device_init_wakeup(&card->dev, true);
if (ret)
pr_err("%s: %s: failed to init wakeup: %d\n",
mmc_hostname(card->host), __func__, ret);
}
device_enable_async_suspend(&card->dev);
ret = device_add(&card->dev);
if (ret)
return ret;
mmc_card_set_present(card);
device_enable_async_suspend(&card->dev);
return 0;
}


@@ -16,12 +16,12 @@
struct mmc_host;
struct mmc_card;
#define MMC_DEV_ATTR(name, fmt, args...) \
#define MMC_DEV_ATTR(name, fmt, args...) \
static ssize_t mmc_##name##_show (struct device *dev, struct device_attribute *attr, char *buf) \
{ \
struct mmc_card *card = mmc_dev_to_card(dev); \
{ \
struct mmc_card *card = mmc_dev_to_card(dev); \
return snprintf(buf, PAGE_SIZE, fmt, args); \
} \
} \
static DEVICE_ATTR(name, S_IRUGO, mmc_##name##_show, NULL)
struct mmc_card *mmc_alloc_card(struct mmc_host *host,


@@ -23,9 +23,8 @@
#define MMC_STATE_BLOCKADDR (1<<2) /* card uses block-addressing */
#define MMC_CARD_SDXC (1<<3) /* card is SDXC */
#define MMC_CARD_REMOVED (1<<4) /* card has been removed */
#define MMC_STATE_DOING_BKOPS (1<<5) /* card is doing manual BKOPS */
#define MMC_STATE_DOING_BKOPS (1<<5) /* card is doing BKOPS */
#define MMC_STATE_SUSPENDED (1<<6) /* card is suspended */
#define MMC_STATE_AUTO_BKOPS (1<<13) /* card is doing auto BKOPS */
#define mmc_card_present(c) ((c)->state & MMC_STATE_PRESENT)
#define mmc_card_readonly(c) ((c)->state & MMC_STATE_READONLY)
@@ -34,7 +33,6 @@
#define mmc_card_removed(c) ((c) && ((c)->state & MMC_CARD_REMOVED))
#define mmc_card_doing_bkops(c) ((c)->state & MMC_STATE_DOING_BKOPS)
#define mmc_card_suspended(c) ((c)->state & MMC_STATE_SUSPENDED)
#define mmc_card_doing_auto_bkops(c) ((c)->state & MMC_STATE_AUTO_BKOPS)
#define mmc_card_set_present(c) ((c)->state |= MMC_STATE_PRESENT)
#define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
@@ -45,8 +43,6 @@
#define mmc_card_clr_doing_bkops(c) ((c)->state &= ~MMC_STATE_DOING_BKOPS)
#define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
#define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
#define mmc_card_set_auto_bkops(c) ((c)->state |= MMC_STATE_AUTO_BKOPS)
#define mmc_card_clr_auto_bkops(c) ((c)->state &= ~MMC_STATE_AUTO_BKOPS)
/*
* The world is not perfect and supplies us with broken mmc/sdio devices.

File diff suppressed because it is too large


@@ -32,7 +32,6 @@ struct mmc_bus_ops {
int (*shutdown)(struct mmc_host *);
int (*hw_reset)(struct mmc_host *);
int (*sw_reset)(struct mmc_host *);
int (*change_bus_speed)(struct mmc_host *host, unsigned long *freq);
};
void mmc_attach_bus(struct mmc_host *host, const struct mmc_bus_ops *ops);
@@ -45,11 +44,6 @@ void mmc_init_erase(struct mmc_card *card);
void mmc_set_chip_select(struct mmc_host *host, int mode);
void mmc_set_clock(struct mmc_host *host, unsigned int hz);
int mmc_clk_update_freq(struct mmc_host *host,
unsigned long freq, enum mmc_load state);
void mmc_gate_clock(struct mmc_host *host);
void mmc_ungate_clock(struct mmc_host *host);
void mmc_set_ungated(struct mmc_host *host);
void mmc_set_bus_mode(struct mmc_host *host, unsigned int mode);
void mmc_set_bus_width(struct mmc_host *host, unsigned int width);
u32 mmc_select_voltage(struct mmc_host *host, u32 ocr);
@@ -70,8 +64,6 @@ static inline void mmc_delay(unsigned int ms)
{
if (ms <= 20)
usleep_range(ms * 1000, ms * 1250);
else if (ms < jiffies_to_msecs(2))
usleep_range(ms * 1000, (ms + 1) * 1000);
else
msleep(ms);
}
@@ -97,12 +89,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host);
void mmc_add_card_debugfs(struct mmc_card *card);
void mmc_remove_card_debugfs(struct mmc_card *card);
extern bool mmc_can_scale_clk(struct mmc_host *host);
extern int mmc_init_clk_scaling(struct mmc_host *host);
extern int mmc_resume_clk_scaling(struct mmc_host *host);
extern int mmc_exit_clk_scaling(struct mmc_host *host);
extern unsigned long mmc_get_max_frequency(struct mmc_host *host);
int mmc_execute_tuning(struct mmc_card *card);
int mmc_hs200_to_hs400(struct mmc_card *card);
int mmc_hs400_to_hs200(struct mmc_card *card);


@@ -33,26 +33,6 @@ module_param(fail_request, charp, 0);
#endif /* CONFIG_FAIL_MMC_REQUEST */
/* The debugfs functions are optimized away when CONFIG_DEBUG_FS isn't set. */
static int mmc_ring_buffer_show(struct seq_file *s, void *data)
{
struct mmc_host *mmc = s->private;
mmc_dump_trace_buffer(mmc, s);
return 0;
}
static int mmc_ring_buffer_open(struct inode *inode, struct file *file)
{
return single_open(file, mmc_ring_buffer_show, inode->i_private);
}
static const struct file_operations mmc_ring_buffer_fops = {
.open = mmc_ring_buffer_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int mmc_ios_show(struct seq_file *s, void *data)
{
static const char *vdd_str[] = {
@@ -245,205 +225,6 @@ static int mmc_clock_opt_set(void *data, u64 val)
DEFINE_SIMPLE_ATTRIBUTE(mmc_clock_fops, mmc_clock_opt_get, mmc_clock_opt_set,
"%llu\n");
#include <linux/delay.h>
static int mmc_scale_get(void *data, u64 *val)
{
struct mmc_host *host = data;
*val = host->clk_scaling.curr_freq;
return 0;
}
static int mmc_scale_set(void *data, u64 val)
{
int err = 0;
struct mmc_host *host = data;
mmc_claim_host(host);
mmc_host_clk_hold(host);
/* change frequency from sysfs manually */
err = mmc_clk_update_freq(host, val, host->clk_scaling.state);
if (err == -EAGAIN)
err = 0;
else if (err)
pr_err("%s: clock scale to %llu failed with error %d\n",
mmc_hostname(host), val, err);
else
pr_debug("%s: clock change to %llu finished successfully (%s)\n",
mmc_hostname(host), val, current->comm);
mmc_host_clk_release(host);
mmc_release_host(host);
return err;
}
DEFINE_DEBUGFS_ATTRIBUTE(mmc_scale_fops, mmc_scale_get, mmc_scale_set,
"%llu\n");
static int mmc_max_clock_get(void *data, u64 *val)
{
struct mmc_host *host = data;
if (!host)
return -EINVAL;
*val = host->f_max;
return 0;
}
static int mmc_max_clock_set(void *data, u64 val)
{
struct mmc_host *host = data;
int err = -EINVAL;
unsigned long freq = val;
unsigned int old_freq;
if (!host || (val < host->f_min))
goto out;
mmc_claim_host(host);
if (host->bus_ops && host->bus_ops->change_bus_speed) {
old_freq = host->f_max;
host->f_max = freq;
err = host->bus_ops->change_bus_speed(host, &freq);
if (err)
host->f_max = old_freq;
}
mmc_release_host(host);
out:
return err;
}
DEFINE_DEBUGFS_ATTRIBUTE(mmc_max_clock_fops, mmc_max_clock_get,
mmc_max_clock_set, "%llu\n");
static int mmc_force_err_set(void *data, u64 val)
{
struct mmc_host *host = data;
if (host && host->card && host->ops &&
host->ops->force_err_irq) {
/*
* To access the force error irq reg, we need to make
* sure the host is powered up and host clock is ticking.
*/
mmc_get_card(host->card, NULL);
host->ops->force_err_irq(host, val);
mmc_put_card(host->card, NULL);
}
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(mmc_force_err_fops, NULL, mmc_force_err_set, "%llu\n");
static int mmc_err_state_get(void *data, u64 *val)
{
struct mmc_host *host = data;
if (!host)
return -EINVAL;
*val = host->err_occurred ? 1 : 0;
return 0;
}
static int mmc_err_state_clear(void *data, u64 val)
{
struct mmc_host *host = data;
if (!host)
return -EINVAL;
host->err_occurred = false;
return 0;
}
DEFINE_DEBUGFS_ATTRIBUTE(mmc_err_state, mmc_err_state_get,
mmc_err_state_clear, "%llu\n");
static int mmc_err_stats_show(struct seq_file *file, void *data)
{
struct mmc_host *host = (struct mmc_host *)file->private;
if (!host)
return -EINVAL;
seq_printf(file, "# Command Timeout Occurred:\t %d\n",
host->err_stats[MMC_ERR_CMD_TIMEOUT]);
seq_printf(file, "# Command CRC Errors Occurred:\t %d\n",
host->err_stats[MMC_ERR_CMD_CRC]);
seq_printf(file, "# Data Timeout Occurred:\t %d\n",
host->err_stats[MMC_ERR_DAT_TIMEOUT]);
seq_printf(file, "# Data CRC Errors Occurred:\t %d\n",
host->err_stats[MMC_ERR_DAT_CRC]);
seq_printf(file, "# Auto-Cmd Error Occurred:\t %d\n",
host->err_stats[MMC_ERR_ADMA]);
seq_printf(file, "# ADMA Error Occurred:\t %d\n",
host->err_stats[MMC_ERR_ADMA]);
seq_printf(file, "# Tuning Error Occurred:\t %d\n",
host->err_stats[MMC_ERR_TUNING]);
seq_printf(file, "# CMDQ RED Errors:\t\t %d\n",
host->err_stats[MMC_ERR_CMDQ_RED]);
seq_printf(file, "# CMDQ GCE Errors:\t\t %d\n",
host->err_stats[MMC_ERR_CMDQ_GCE]);
seq_printf(file, "# CMDQ ICCE Errors:\t\t %d\n",
host->err_stats[MMC_ERR_CMDQ_ICCE]);
seq_printf(file, "# Request Timedout:\t %d\n",
host->err_stats[MMC_ERR_REQ_TIMEOUT]);
seq_printf(file, "# CMDQ Request Timedout:\t %d\n",
host->err_stats[MMC_ERR_CMDQ_REQ_TIMEOUT]);
seq_printf(file, "# ICE Config Errors:\t\t %d\n",
host->err_stats[MMC_ERR_ICE_CFG]);
return 0;
}
static int mmc_err_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, mmc_err_stats_show, inode->i_private);
}
static ssize_t mmc_err_stats_write(struct file *filp, const char __user *ubuf,
size_t cnt, loff_t *ppos)
{
struct mmc_host *host = filp->f_mapping->host->i_private;
if (!host)
return -EINVAL;
pr_debug("%s: Resetting MMC error statistics\n", __func__);
memset(host->err_stats, 0, sizeof(host->err_stats));
return cnt;
}
static const struct file_operations mmc_err_stats_fops = {
.open = mmc_err_stats_open,
.read = seq_read,
.write = mmc_err_stats_write,
};
void mmc_add_host_debugfs(struct mmc_host *host)
{
struct dentry *root;
@@ -462,43 +243,6 @@ void mmc_add_host_debugfs(struct mmc_host *host)
if (!debugfs_create_file("ios", 0400, root, host, &mmc_ios_fops))
goto err_node;
if (!debugfs_create_file("max_clock", 0600, root, host,
&mmc_max_clock_fops))
goto err_node;
if (!debugfs_create_file("scale", 0600, root, host,
&mmc_scale_fops))
goto err_node;
if (!debugfs_create_bool("skip_clk_scale_freq_update",
0600, root,
&host->clk_scaling.skip_clk_scale_freq_update))
goto err_node;
if (!debugfs_create_bool("crash_on_err",
0600, root,
&host->crash_on_err))
goto err_node;
#ifdef CONFIG_MMC_RING_BUFFER
if (!debugfs_create_file("ring_buffer", 0400,
root, host, &mmc_ring_buffer_fops))
goto err_node;
#endif
if (!debugfs_create_file("err_state", 0600, root, host,
&mmc_err_state))
goto err_node;
if (!debugfs_create_file("err_stats", 0600, root, host,
&mmc_err_stats_fops))
goto err_node;
#ifdef CONFIG_MMC_CLKGATE
if (!debugfs_create_u32("clk_delay", 0600,
root, &host->clk_delay))
goto err_node;
#endif
if (!debugfs_create_x32("caps", 0400, root, &host->caps))
goto err_node;
@@ -518,10 +262,6 @@ void mmc_add_host_debugfs(struct mmc_host *host)
&host->fail_mmc_request)))
goto err_node;
#endif
if (!debugfs_create_file("force_error", 0200, root, host,
&mmc_force_err_fops))
goto err_node;
return;
err_node:
@@ -536,89 +276,6 @@ void mmc_remove_host_debugfs(struct mmc_host *host)
debugfs_remove_recursive(host->debugfs_root);
}
static int mmc_bkops_stats_read(struct seq_file *file, void *data)
{
struct mmc_card *card = file->private;
struct mmc_bkops_stats *stats;
int i;
if (!card)
return -EINVAL;
stats = &card->bkops.stats;
if (!stats->enabled) {
pr_info("%s: bkops statistics are disabled\n",
mmc_hostname(card->host));
goto exit;
}
spin_lock(&stats->lock);
seq_printf(file, "%s: bkops statistics:\n",
mmc_hostname(card->host));
seq_printf(file, "%s: BKOPS: sent START_BKOPS to device: %u\n",
mmc_hostname(card->host), stats->manual_start);
seq_printf(file, "%s: BKOPS: stopped due to HPI: %u\n",
mmc_hostname(card->host), stats->hpi);
seq_printf(file, "%s: BKOPS: sent AUTO_EN set to 1: %u\n",
mmc_hostname(card->host), stats->auto_start);
seq_printf(file, "%s: BKOPS: sent AUTO_EN set to 0: %u\n",
mmc_hostname(card->host), stats->auto_stop);
for (i = 0 ; i < MMC_BKOPS_NUM_SEVERITY_LEVELS ; ++i)
seq_printf(file, "%s: BKOPS: due to level %d: %u\n",
mmc_hostname(card->host), i, stats->level[i]);
spin_unlock(&stats->lock);
exit:
return 0;
}
static ssize_t mmc_bkops_stats_write(struct file *filp,
const char __user *ubuf, size_t cnt,
loff_t *ppos)
{
struct mmc_card *card = filp->f_mapping->host->i_private;
int value;
struct mmc_bkops_stats *stats;
int err;
if (!card)
return cnt;
stats = &card->bkops.stats;
err = kstrtoint_from_user(ubuf, cnt, 0, &value);
if (err) {
pr_err("%s: %s: error parsing input from user (%d)\n",
mmc_hostname(card->host), __func__, err);
return err;
}
if (value) {
mmc_blk_init_bkops_statistics(card);
} else {
spin_lock(&stats->lock);
stats->enabled = false;
spin_unlock(&stats->lock);
}
return cnt;
}
static int mmc_bkops_stats_open(struct inode *inode, struct file *file)
{
return single_open(file, mmc_bkops_stats_read, inode->i_private);
}
static const struct file_operations mmc_dbg_bkops_stats_fops = {
.open = mmc_bkops_stats_open,
.read = seq_read,
.write = mmc_bkops_stats_write,
};
void mmc_add_card_debugfs(struct mmc_card *card)
{
struct mmc_host *host = card->host;
@@ -641,13 +298,6 @@ void mmc_add_card_debugfs(struct mmc_card *card)
if (!debugfs_create_x32("state", 0400, root, &card->state))
goto err;
if (mmc_card_mmc(card) && (card->ext_csd.rev >= 5) &&
(mmc_card_configured_auto_bkops(card) ||
mmc_card_configured_manual_bkops(card)))
if (!debugfs_create_file("bkops_stats", 0400, root, card,
&mmc_dbg_bkops_stats_fops))
goto err;
return;
err:


@@ -24,8 +24,6 @@
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/ring_buffer.h>
#include <linux/mmc/slot-gpio.h>
#include "core.h"
@@ -36,10 +34,6 @@
#define cls_dev_to_mmc_host(d) container_of(d, struct mmc_host, class_dev)
#define MMC_DEVFRQ_DEFAULT_UP_THRESHOLD 35
#define MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD 5
#define MMC_DEVFRQ_DEFAULT_POLLING_MSEC 100
static DEFINE_IDA(mmc_host_ida);
static void mmc_host_classdev_release(struct device *dev)
@@ -49,28 +43,9 @@ static void mmc_host_classdev_release(struct device *dev)
kfree(host);
}
static int mmc_host_prepare(struct device *dev)
{
/*
* Since mmc_host is a virtual device, we don't have to do anything.
* If we return a positive value, the pm framework will consider that
* the runtime suspend and system suspend of this device is same and
* will set direct_complete flag as true. We don't want this as the
* mmc_host always has positive disable_depth and setting the flag
* will not speed up the suspend process.
* So return 0.
*/
return 0;
}
static const struct dev_pm_ops mmc_pm_ops = {
.prepare = mmc_host_prepare,
};
static struct class mmc_host_class = {
.name = "mmc_host",
.dev_release = mmc_host_classdev_release,
.pm = &mmc_pm_ops,
};
int mmc_register_host_class(void)
@@ -83,302 +58,6 @@ void mmc_unregister_host_class(void)
class_unregister(&mmc_host_class);
}
#ifdef CONFIG_MMC_CLKGATE
static ssize_t clkgate_delay_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
return snprintf(buf, PAGE_SIZE, "%lu\n", host->clkgate_delay);
}
static ssize_t clkgate_delay_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long flags, value;
if (kstrtoul(buf, 0, &value))
return -EINVAL;
spin_lock_irqsave(&host->clk_lock, flags);
host->clkgate_delay = value;
spin_unlock_irqrestore(&host->clk_lock, flags);
return count;
}
/*
* Enabling clock gating will make the core call out to the host
* once up and once down when it performs a request or card operation
* intermingled in any fashion. The driver will see this through
* set_ios() operations with ios.clock field set to 0 to gate (disable)
* the block clock, and to the old frequency to enable it again.
*/
static void mmc_host_clk_gate_delayed(struct mmc_host *host)
{
unsigned long tick_ns;
unsigned long freq = host->ios.clock;
unsigned long flags;
if (!freq) {
pr_debug("%s: frequency set to 0 in disable function, this means the clock is already disabled.\n",
mmc_hostname(host));
return;
}
/*
* New requests may have appeared while we were scheduling,
* then there is no reason to delay the check before
* clk_disable().
*/
spin_lock_irqsave(&host->clk_lock, flags);
/*
* Delay n bus cycles (at least 8 from MMC spec) before attempting
* to disable the MCI block clock. The reference count may have
* gone up again after this delay due to rescheduling!
*/
if (!host->clk_requests) {
spin_unlock_irqrestore(&host->clk_lock, flags);
tick_ns = DIV_ROUND_UP(1000000000, freq);
ndelay(host->clk_delay * tick_ns);
} else {
/* New users appeared while waiting for this work */
spin_unlock_irqrestore(&host->clk_lock, flags);
return;
}
mutex_lock(&host->clk_gate_mutex);
spin_lock_irqsave(&host->clk_lock, flags);
if (!host->clk_requests) {
spin_unlock_irqrestore(&host->clk_lock, flags);
/* This will set host->ios.clock to 0 */
mmc_gate_clock(host);
spin_lock_irqsave(&host->clk_lock, flags);
pr_debug("%s: gated MCI clock\n", mmc_hostname(host));
}
spin_unlock_irqrestore(&host->clk_lock, flags);
mutex_unlock(&host->clk_gate_mutex);
}
/*
* Internal work. Work to disable the clock at some later point.
*/
static void mmc_host_clk_gate_work(struct work_struct *work)
{
struct mmc_host *host = container_of(work, struct mmc_host,
clk_gate_work.work);
mmc_host_clk_gate_delayed(host);
}
/**
* mmc_host_clk_hold - ungate hardware MCI clocks
* @host: host to ungate.
*
* Makes sure the host ios.clock is restored to a non-zero value
* past this call. Increase clock reference count and ungate clock
* if we're the first user.
*/
void mmc_host_clk_hold(struct mmc_host *host)
{
unsigned long flags;
/* cancel any clock gating work scheduled by mmc_host_clk_release() */
cancel_delayed_work_sync(&host->clk_gate_work);
mutex_lock(&host->clk_gate_mutex);
spin_lock_irqsave(&host->clk_lock, flags);
if (host->clk_gated) {
spin_unlock_irqrestore(&host->clk_lock, flags);
mmc_ungate_clock(host);
spin_lock_irqsave(&host->clk_lock, flags);
pr_debug("%s: ungated MCI clock\n", mmc_hostname(host));
}
host->clk_requests++;
spin_unlock_irqrestore(&host->clk_lock, flags);
mutex_unlock(&host->clk_gate_mutex);
}
/**
* mmc_host_may_gate_card - check if this card may be gated
* @card: card to check.
*/
bool mmc_host_may_gate_card(struct mmc_card *card)
{
/* If there is no card we may gate it */
if (!card)
return true;
/*
* SDIO3.0 card allows the clock to be gated off so check if
* that is the case or not.
*/
if (mmc_card_sdio(card) && card->cccr.async_intr_sup)
return true;
/*
* Don't gate SDIO cards! These need to be clocked at all times
* since they may be independent systems generating interrupts
* and other events. The clock requests counter from the core will
* go down to zero since the core does not need it, but we will not
* gate the clock, because there is somebody out there that may still
* be using it.
*/
return !(card->quirks & MMC_QUIRK_BROKEN_CLK_GATING);
}
/**
* mmc_host_clk_release - gate off hardware MCI clocks
* @host: host to gate.
*
* Calls the host driver with ios.clock set to zero as often as possible
* in order to gate off hardware MCI clocks. Decrease clock reference
* count and schedule disabling of clock.
*/
void mmc_host_clk_release(struct mmc_host *host)
{
unsigned long flags;
spin_lock_irqsave(&host->clk_lock, flags);
host->clk_requests--;
if (mmc_host_may_gate_card(host->card) &&
!host->clk_requests)
queue_delayed_work(host->clk_gate_wq, &host->clk_gate_work,
msecs_to_jiffies(host->clkgate_delay));
spin_unlock_irqrestore(&host->clk_lock, flags);
}
/**
* mmc_host_clk_rate - get current clock frequency setting
* @host: host to get the clock frequency for.
*
* Returns current clock frequency regardless of gating.
*/
unsigned int mmc_host_clk_rate(struct mmc_host *host)
{
unsigned long freq;
unsigned long flags;
spin_lock_irqsave(&host->clk_lock, flags);
if (host->clk_gated)
freq = host->clk_old;
else
freq = host->ios.clock;
spin_unlock_irqrestore(&host->clk_lock, flags);
return freq;
}
/**
* mmc_host_clk_init - set up clock gating code
* @host: host with potential clock to control
*/
static inline void mmc_host_clk_init(struct mmc_host *host)
{
host->clk_requests = 0;
/* Hold MCI clock for 8 cycles by default */
host->clk_delay = 8;
/*
* Default clock gating delay is 0ms to avoid wasting power.
* This value can be tuned by writing into sysfs entry.
*/
host->clkgate_delay = 0;
host->clk_gated = false;
INIT_DELAYED_WORK(&host->clk_gate_work, mmc_host_clk_gate_work);
spin_lock_init(&host->clk_lock);
mutex_init(&host->clk_gate_mutex);
}
/**
* mmc_host_clk_exit - shut down clock gating code
* @host: host with potential clock to control
*/
static inline void mmc_host_clk_exit(struct mmc_host *host)
{
/*
* Wait for any outstanding gate and then make sure we're
* ungated before exiting.
*/
if (cancel_delayed_work_sync(&host->clk_gate_work))
mmc_host_clk_gate_delayed(host);
if (host->clk_gated)
mmc_host_clk_hold(host);
if (host->clk_gate_wq)
destroy_workqueue(host->clk_gate_wq);
/* There should be only one user now */
WARN_ON(host->clk_requests > 1);
}
static inline void mmc_host_clk_sysfs_init(struct mmc_host *host)
{
host->clkgate_delay_attr.show = clkgate_delay_show;
host->clkgate_delay_attr.store = clkgate_delay_store;
sysfs_attr_init(&host->clkgate_delay_attr.attr);
host->clkgate_delay_attr.attr.name = "clkgate_delay";
host->clkgate_delay_attr.attr.mode = 0644;
if (device_create_file(&host->class_dev, &host->clkgate_delay_attr))
pr_err("%s: Failed to create clkgate_delay sysfs entry\n",
mmc_hostname(host));
}
static inline bool mmc_host_clk_gate_wq_init(struct mmc_host *host)
{
char *wq = NULL;
int wq_nl;
bool ret = true;
wq_nl = sizeof("mmc_clk_gate/") + sizeof(mmc_hostname(host)) + 1;
wq = kzalloc(wq_nl, GFP_KERNEL);
if (!wq) {
ret = false;
goto out;
}
snprintf(wq, wq_nl, "mmc_clk_gate/%s", mmc_hostname(host));
/*
 * Create the mmc clock gate work queue with the WQ_MEM_RECLAIM
 * flag set. Because the mmc thread is created with PF_MEMALLOC
 * set, the kernel checks for WQ_MEM_RECLAIM when flushing the
 * work queue and triggers a warning if the flag is missing.
 */
*/
host->clk_gate_wq = create_workqueue(wq);
if (!host->clk_gate_wq) {
ret = false;
dev_err(host->parent,
"failed to create clock gate work queue\n");
}
kfree(wq);
out:
return ret;
}
#else
static inline void mmc_host_clk_init(struct mmc_host *host)
{
}
static inline void mmc_host_clk_exit(struct mmc_host *host)
{
}
static inline void mmc_host_clk_sysfs_init(struct mmc_host *host)
{
}
bool mmc_host_may_gate_card(struct mmc_card *card)
{
return false;
}
static inline bool mmc_host_clk_gate_wq_init(struct mmc_host *host)
{
return true;
}
#endif
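The hold/release pair above is a classic reference-counting scheme: the clock is ungated for the first user and only gated again once the last user drops its reference. A minimal userspace sketch of that invariant (names are illustrative, not kernel API; the real driver defers gating to delayed work under a spinlock):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the clk_requests reference count behind
 * mmc_host_clk_hold()/mmc_host_clk_release(). */
struct clk_model {
    int requests;   /* mirrors host->clk_requests */
    bool gated;     /* mirrors host->clk_gated */
};

static void clk_hold(struct clk_model *c)
{
    if (c->gated)
        c->gated = false;   /* first user ungates the clock */
    c->requests++;
}

static void clk_release(struct clk_model *c)
{
    c->requests--;
    if (c->requests == 0)
        c->gated = true;    /* the driver defers this to delayed work */
}
```

As long as at least one `clk_hold()` is outstanding, a `clk_release()` never gates the clock; only the final release does.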
void mmc_retune_enable(struct mmc_host *host)
{
host->can_retune = 1;
@@ -386,7 +65,6 @@ void mmc_retune_enable(struct mmc_host *host)
mod_timer(&host->retune_timer,
jiffies + host->retune_period * HZ);
}
EXPORT_SYMBOL(mmc_retune_enable);
/*
* Pause re-tuning for a small set of operations. The pause begins after the
@@ -419,7 +97,6 @@ void mmc_retune_disable(struct mmc_host *host)
host->retune_now = 0;
host->need_retune = 0;
}
EXPORT_SYMBOL(mmc_retune_disable);
void mmc_retune_timer_stop(struct mmc_host *host)
{
@@ -713,13 +390,6 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
return NULL;
}
if (!mmc_host_clk_gate_wq_init(host)) {
kfree(host);
return NULL;
}
mmc_host_clk_init(host);
spin_lock_init(&host->lock);
init_waitqueue_head(&host->wq);
INIT_DELAYED_WORK(&host->detect, mmc_rescan);
@@ -745,214 +415,6 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
EXPORT_SYMBOL(mmc_alloc_host);
static ssize_t enable_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n", mmc_can_scale_clk(host));
}
static ssize_t enable_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || !host->card || kstrtoul(buf, 0, &value))
return -EINVAL;
mmc_get_card(host->card, NULL);
if (!value) {
/* Suspend the clock scaling and mask host capability */
if (host->clk_scaling.enable)
mmc_suspend_clk_scaling(host);
host->clk_scaling.enable = false;
host->caps2 &= ~MMC_CAP2_CLK_SCALE;
host->clk_scaling.state = MMC_LOAD_HIGH;
/* Set to max. frequency when disabling */
mmc_clk_update_freq(host, host->card->clk_scaling_highest,
host->clk_scaling.state);
} else if (value) {
/* Unmask host capability and resume scaling */
host->caps2 |= MMC_CAP2_CLK_SCALE;
if (!host->clk_scaling.enable) {
host->clk_scaling.enable = true;
mmc_resume_clk_scaling(host);
}
}
mmc_put_card(host->card, NULL);
return count;
}
static ssize_t up_threshold_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n", host->clk_scaling.upthreshold);
}
#define MAX_PERCENTAGE 100
static ssize_t up_threshold_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
return -EINVAL;
host->clk_scaling.upthreshold = value;
pr_debug("%s: clkscale_up_thresh set to %lu\n",
mmc_hostname(host), value);
return count;
}
static ssize_t down_threshold_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%d\n",
host->clk_scaling.downthreshold);
}
static ssize_t down_threshold_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value) || (value > MAX_PERCENTAGE))
return -EINVAL;
host->clk_scaling.downthreshold = value;
pr_debug("%s: clkscale_down_thresh set to %lu\n",
mmc_hostname(host), value);
return count;
}
static ssize_t polling_interval_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
if (!host)
return -EINVAL;
return snprintf(buf, PAGE_SIZE, "%lu milliseconds\n",
host->clk_scaling.polling_delay_ms);
}
static ssize_t polling_interval_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
unsigned long value;
if (!host || kstrtoul(buf, 0, &value))
return -EINVAL;
host->clk_scaling.polling_delay_ms = value;
pr_debug("%s: clkscale_polling_delay_ms set to %lu\n",
mmc_hostname(host), value);
return count;
}
DEVICE_ATTR_RW(enable);
DEVICE_ATTR_RW(polling_interval);
DEVICE_ATTR_RW(up_threshold);
DEVICE_ATTR_RW(down_threshold);
static struct attribute *clk_scaling_attrs[] = {
&dev_attr_enable.attr,
&dev_attr_up_threshold.attr,
&dev_attr_down_threshold.attr,
&dev_attr_polling_interval.attr,
NULL,
};
static struct attribute_group clk_scaling_attr_grp = {
.name = "clk_scaling",
.attrs = clk_scaling_attrs,
};
#ifdef CONFIG_MMC_PERF_PROFILING
static ssize_t
perf_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
int64_t rtime_drv, wtime_drv;
unsigned long rbytes_drv, wbytes_drv, flags;
spin_lock_irqsave(&host->lock, flags);
rbytes_drv = host->perf.rbytes_drv;
wbytes_drv = host->perf.wbytes_drv;
rtime_drv = ktime_to_us(host->perf.rtime_drv);
wtime_drv = ktime_to_us(host->perf.wtime_drv);
spin_unlock_irqrestore(&host->lock, flags);
return snprintf(buf, PAGE_SIZE, "Write performance at driver Level: %lu bytes in %lld microseconds. Read performance at driver Level: %lu bytes in %lld microseconds\n",
wbytes_drv, wtime_drv,
rbytes_drv, rtime_drv);
}
static ssize_t
perf_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
int64_t value;
unsigned long flags;
if (kstrtou64(buf, 0, &value) < 0)
return -EINVAL;
spin_lock_irqsave(&host->lock, flags);
if (!value) {
memset(&host->perf, 0, sizeof(host->perf));
host->perf_enable = false;
} else {
host->perf_enable = true;
}
spin_unlock_irqrestore(&host->lock, flags);
return count;
}
static DEVICE_ATTR_RW(perf);
#endif
static struct attribute *dev_attrs[] = {
#ifdef CONFIG_MMC_PERF_PROFILING
&dev_attr_perf.attr,
#endif
NULL,
};
static struct attribute_group dev_attr_grp = {
.attrs = dev_attrs,
};
/**
* mmc_add_host - initialise host hardware
* @host: mmc host
@@ -974,26 +436,9 @@ int mmc_add_host(struct mmc_host *host)
led_trigger_register_simple(dev_name(&host->class_dev), &host->led);
host->clk_scaling.upthreshold = MMC_DEVFRQ_DEFAULT_UP_THRESHOLD;
host->clk_scaling.downthreshold = MMC_DEVFRQ_DEFAULT_DOWN_THRESHOLD;
host->clk_scaling.polling_delay_ms = MMC_DEVFRQ_DEFAULT_POLLING_MSEC;
host->clk_scaling.skip_clk_scale_freq_update = false;
#ifdef CONFIG_DEBUG_FS
mmc_add_host_debugfs(host);
#endif
mmc_host_clk_sysfs_init(host);
mmc_trace_init(host);
err = sysfs_create_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
if (err)
pr_err("%s: failed to create clk scale sysfs group with err %d\n",
__func__, err);
err = sysfs_create_group(&host->class_dev.kobj, &dev_attr_grp);
if (err)
pr_err("%s: failed to create sysfs group with err %d\n",
__func__, err);
mmc_start_host(host);
if (!(host->pm_flags & MMC_PM_IGNORE_PM_NOTIFY))
@@ -1022,14 +467,9 @@ void mmc_remove_host(struct mmc_host *host)
mmc_remove_host_debugfs(host);
#endif
sysfs_remove_group(&host->parent->kobj, &dev_attr_grp);
sysfs_remove_group(&host->class_dev.kobj, &clk_scaling_attr_grp);
device_del(&host->class_dev);
led_trigger_unregister_simple(host->led);
mmc_host_clk_exit(host);
}
EXPORT_SYMBOL(mmc_remove_host);

File diff suppressed because it is too large


@@ -55,14 +55,6 @@ static const u8 tuning_blk_pattern_8bit[] = {
0xff, 0x77, 0x77, 0xff, 0x77, 0xbb, 0xdd, 0xee,
};
static void mmc_update_bkops_hpi(struct mmc_bkops_stats *stats)
{
spin_lock_irq(&stats->lock);
if (stats->enabled)
stats->hpi++;
spin_unlock_irq(&stats->lock);
}
int __mmc_send_status(struct mmc_card *card, u32 *status, unsigned int retries)
{
int err;
@@ -463,7 +455,6 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
u32 status = 0;
bool expired = false;
bool busy = false;
int retries = 5;
/* We have an unspecified cmd timeout, use the fallback value. */
if (!timeout_ms)
@@ -505,52 +496,15 @@ static int mmc_poll_for_busy(struct mmc_card *card, unsigned int timeout_ms,
/* Timeout if the device still remains busy. */
if (expired && busy) {
pr_err("%s: Card stuck being busy! %s, timeout:%ums, retries:%d\n",
mmc_hostname(host), __func__,
timeout_ms, retries);
if (retries)
timeout = jiffies +
msecs_to_jiffies(timeout_ms);
else {
return -ETIMEDOUT;
}
retries--;
pr_err("%s: Card stuck being busy! %s\n",
mmc_hostname(host), __func__);
return -ETIMEDOUT;
}
} while (busy);
return 0;
}
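The revert above replaces the retrying variant of the busy loop with the simpler "poll until ready or deadline" structure. A self-contained sketch of that control flow, with time abstracted as a tick counter instead of jiffies (all names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

#define ETIMEDOUT 110   /* errno value used for -ETIMEDOUT */

/* Sketch of the mmc_poll_for_busy() structure: keep polling a busy
 * predicate until the device reports ready, and fail with -ETIMEDOUT
 * if it is still busy once the deadline has passed. */
static int poll_for_busy(bool (*still_busy)(void *), void *ctx,
                         unsigned int timeout_ticks, unsigned int *now)
{
    unsigned int deadline = *now + timeout_ticks;
    bool busy;

    do {
        busy = still_busy(ctx);
        (*now)++;                      /* one poll iteration per tick */
        if (busy && *now > deadline)
            return -ETIMEDOUT;         /* card stuck being busy */
    } while (busy);

    return 0;
}

/* Example predicate: reports busy for the first *remaining polls. */
static bool countdown_busy(void *ctx)
{
    unsigned int *remaining = ctx;
    if (*remaining == 0)
        return false;
    (*remaining)--;
    return true;
}
```

A card that deasserts busy before the deadline returns 0; one that stays busy past it returns `-ETIMEDOUT`, matching the error path in the diff.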
/**
* mmc_prepare_switch - helper; prepare to modify EXT_CSD register
* @card: the MMC card associated with the data transfer
* @set: cmd set values
* @index: EXT_CSD register index
* @value: value to program into EXT_CSD register
* @tout_ms: timeout (ms) for operation performed by register write,
* timeout of zero implies maximum possible timeout
* @use_busy_signal: use the busy signal as response type
*
* Helper to prepare to modify EXT_CSD register for selected card.
*/
static inline void mmc_prepare_switch(struct mmc_command *cmd, u8 index,
u8 value, u8 set, unsigned int tout_ms,
bool use_busy_signal)
{
cmd->opcode = MMC_SWITCH;
cmd->arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
(index << 16) |
(value << 8) |
set;
cmd->flags = MMC_CMD_AC;
cmd->busy_timeout = tout_ms;
if (use_busy_signal)
cmd->flags |= MMC_RSP_SPI_R1B | MMC_RSP_R1B;
else
cmd->flags |= MMC_RSP_SPI_R1 | MMC_RSP_R1;
}
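The argument packing that `mmc_prepare_switch()` performs (and that the revert inlines back into `__mmc_switch()` below) places the access mode in bits [31:24], the EXT_CSD byte index in [23:16], the new value in [15:8], and the command set in the low bits. A standalone sketch, assuming the write-byte mode value 0x03 from the eMMC spec:

```c
#include <assert.h>
#include <stdint.h>

#define MMC_SWITCH_MODE_WRITE_BYTE 0x03  /* write a single EXT_CSD byte */

/* Pack a CMD6 (SWITCH) argument the same way the diff's code does. */
static uint32_t mmc_switch_arg(uint8_t index, uint8_t value, uint8_t set)
{
    return ((uint32_t)MMC_SWITCH_MODE_WRITE_BYTE << 24) |
           ((uint32_t)index << 16) |
           ((uint32_t)value << 8) |
           set;
}
```

For example, writing 1 to EXT_CSD byte 32 (the FLUSH_CACHE byte, per the spec) with command set 0 packs to `0x03200100`.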
/**
* __mmc_switch - modify EXT_CSD register
* @card: the MMC card associated with the data transfer
@@ -588,13 +542,25 @@ int __mmc_switch(struct mmc_card *card, u8 set, u8 index, u8 value,
(timeout_ms > host->max_busy_timeout))
use_r1b_resp = false;
mmc_prepare_switch(&cmd, index, value, set, timeout_ms,
use_r1b_resp);
cmd.opcode = MMC_SWITCH;
cmd.arg = (MMC_SWITCH_MODE_WRITE_BYTE << 24) |
(index << 16) |
(value << 8) |
set;
cmd.flags = MMC_CMD_AC;
if (use_r1b_resp) {
cmd.flags |= MMC_RSP_SPI_R1B | MMC_RSP_R1B;
/*
* A busy_timeout of zero means the host can decide to use
* whatever value it finds suitable.
*/
cmd.busy_timeout = timeout_ms;
} else {
cmd.flags |= MMC_RSP_SPI_R1 | MMC_RSP_R1;
}
if (index == EXT_CSD_SANITIZE_START)
cmd.sanitize_busy = true;
else if (index == EXT_CSD_BKOPS_START)
cmd.bkops_busy = true;
err = mmc_wait_for_cmd(host, &cmd, MMC_CMD_RETRIES);
if (err)
@@ -788,10 +754,7 @@ mmc_send_bus_test(struct mmc_card *card, struct mmc_host *host, u8 opcode,
data.sg = &sg;
data.sg_len = 1;
data.timeout_ns = 1000000;
data.timeout_clks = 0;
mmc_set_data_timeout(&data, card);
sg_init_one(&sg, data_buf, len);
mmc_wait_for_req(host, &mrq);
err = 0;
@@ -839,7 +802,7 @@ static int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status)
unsigned int opcode;
int err;
if (!card->ext_csd.hpi_en) {
if (!card->ext_csd.hpi) {
pr_warn("%s: Card didn't support HPI command\n",
mmc_hostname(card->host));
return -EINVAL;
@@ -856,7 +819,7 @@ static int mmc_send_hpi_cmd(struct mmc_card *card, u32 *status)
err = mmc_wait_for_cmd(card->host, &cmd, 0);
if (err) {
pr_debug("%s: error %d interrupting operation. "
pr_warn("%s: error %d interrupting operation. "
"HPI command response %#x\n", mmc_hostname(card->host),
err, cmd.resp[0]);
return err;
@@ -921,13 +884,8 @@ int mmc_interrupt_hpi(struct mmc_card *card)
if (!err && R1_CURRENT_STATE(status) == R1_STATE_TRAN)
break;
if (time_after(jiffies, prg_wait)) {
err = mmc_send_status(card, &status);
if (!err && R1_CURRENT_STATE(status) != R1_STATE_TRAN)
err = -ETIMEDOUT;
else
break;
}
if (time_after(jiffies, prg_wait))
err = -ETIMEDOUT;
} while (!err);
out:
@@ -952,11 +910,6 @@ int mmc_stop_bkops(struct mmc_card *card)
{
int err = 0;
if (unlikely(!mmc_card_configured_manual_bkops(card)))
goto out;
if (!mmc_card_doing_bkops(card))
goto out;
err = mmc_interrupt_hpi(card);
/*
@@ -965,16 +918,14 @@
*/
if (!err || (err == -EINVAL)) {
mmc_card_clr_doing_bkops(card);
mmc_update_bkops_hpi(&card->bkops.stats);
mmc_retune_release(card->host);
err = 0;
}
out:
return err;
}
EXPORT_SYMBOL(mmc_stop_bkops);
int mmc_read_bkops_status(struct mmc_card *card)
static int mmc_read_bkops_status(struct mmc_card *card)
{
int err;
u8 *ext_csd;
@@ -983,17 +934,11 @@ int mmc_read_bkops_status(struct mmc_card *card)
if (err)
return err;
card->ext_csd.raw_bkops_status = ext_csd[EXT_CSD_BKOPS_STATUS] &
MMC_BKOPS_URGENCY_MASK;
card->ext_csd.raw_exception_status =
ext_csd[EXT_CSD_EXP_EVENTS_STATUS] &
(EXT_CSD_URGENT_BKOPS |
EXT_CSD_DYNCAP_NEEDED |
EXT_CSD_SYSPOOL_EXHAUSTED);
card->ext_csd.raw_bkops_status = ext_csd[EXT_CSD_BKOPS_STATUS];
card->ext_csd.raw_exception_status = ext_csd[EXT_CSD_EXP_EVENTS_STATUS];
kfree(ext_csd);
return 0;
}
EXPORT_SYMBOL(mmc_read_bkops_status);
/**
* mmc_start_bkops - start BKOPS for supported cards
@@ -1069,23 +1014,12 @@ int mmc_flush_cache(struct mmc_card *card)
if (mmc_card_mmc(card) &&
(card->ext_csd.cache_size > 0) &&
(card->ext_csd.cache_ctrl & 1) &&
(!(card->quirks & MMC_QUIRK_CACHE_DISABLE))) {
(card->ext_csd.cache_ctrl & 1)) {
err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_FLUSH_CACHE, 1, 0);
if (err == -ETIMEDOUT) {
pr_err("%s: cache flush timeout\n",
mmc_hostname(card->host));
err = mmc_interrupt_hpi(card);
if (err) {
pr_err("%s: mmc_interrupt_hpi() failed (%d)\n",
mmc_hostname(card->host), err);
err = -ENODEV;
}
} else if (err) {
if (err)
pr_err("%s: cache flush error %d\n",
mmc_hostname(card->host), err);
}
}
return err;


@@ -45,7 +45,6 @@ void mmc_start_bkops(struct mmc_card *card, bool from_exception);
int mmc_flush_cache(struct mmc_card *card);
int mmc_cmdq_enable(struct mmc_card *card);
int mmc_cmdq_disable(struct mmc_card *card);
int mmc_read_bkops_status(struct mmc_card *card);
#endif


@@ -3114,8 +3114,7 @@ static ssize_t mtf_test_write(struct file *file, const char __user *buf,
}
#ifdef CONFIG_HIGHMEM
if (test->highmem)
__free_pages(test->highmem, BUFFER_ORDER);
__free_pages(test->highmem, BUFFER_ORDER);
#endif
kfree(test->buffer);
kfree(test);


@@ -206,12 +206,8 @@ static int __mmc_init_request(struct mmc_queue *mq, struct request *req,
gfp_t gfp)
{
struct mmc_queue_req *mq_rq = req_to_mmc_queue_req(req);
struct mmc_host *host;
if (!mq)
return -ENODEV;
host = mq->card->host;
struct mmc_card *card = mq->card;
struct mmc_host *host = card->host;
mq_rq->sg = mmc_alloc_sg(host->max_segs, gfp);
if (!mq_rq->sg)
@@ -497,8 +493,7 @@ void mmc_cleanup_queue(struct mmc_queue *mq)
if (blk_queue_quiesced(q))
blk_mq_unquiesce_queue(q);
if (likely(!blk_queue_dead(q)))
blk_cleanup_queue(q);
blk_cleanup_queue(q);
/*
* A request can be completed before the next request, potentially


@@ -10,10 +10,6 @@
*
*/
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/mmc/card.h>
#include <linux/mmc/sdio_ids.h>
#include "card.h"
@@ -55,16 +51,6 @@ static const struct mmc_fixup mmc_blk_fixups[] = {
MMC_QUIRK_BLK_NO_CMD23),
MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
MMC_QUIRK_BLK_NO_CMD23),
MMC_FIXUP(CID_NAME_ANY, CID_MANFID_TOSHIBA, CID_OEMID_ANY,
add_quirk_mmc, MMC_QUIRK_CMDQ_EMPTY_BEFORE_DCMD),
/*
* Some SD cards lockup while using CMD23 multiblock transfers.
*/
MMC_FIXUP("AF SD", CID_MANFID_ATP, CID_OEMID_ANY, add_quirk_sd,
MMC_QUIRK_BLK_NO_CMD23),
MMC_FIXUP("APUSD", CID_MANFID_APACER, 0x5048, add_quirk_sd,
MMC_QUIRK_BLK_NO_CMD23),
/*
* Some SD cards lockup while using CMD23 multiblock transfers.
@@ -82,20 +68,6 @@ static const struct mmc_fixup mmc_blk_fixups[] = {
MMC_FIXUP("008GE0", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
MMC_QUIRK_LONG_READ_TIME),
/*
* Some Samsung MMC cards need longer data read timeout than
* indicated in CSD.
*/
MMC_FIXUP("Q7XSAB", CID_MANFID_SAMSUNG, 0x100, add_quirk_mmc,
MMC_QUIRK_LONG_READ_TIME),
/*
* Hynix eMMC cards need longer data read timeout than
* indicated in CSD.
*/
MMC_FIXUP(CID_NAME_ANY, CID_MANFID_HYNIX, CID_OEMID_ANY, add_quirk_mmc,
MMC_QUIRK_LONG_READ_TIME),
/*
* On these Samsung MoviNAND parts, performing secure erase or
* secure trim can result in unrecoverable corruption due to a
@@ -127,10 +99,6 @@ static const struct mmc_fixup mmc_blk_fixups[] = {
MMC_FIXUP("V10016", CID_MANFID_KINGSTON, CID_OEMID_ANY, add_quirk_mmc,
MMC_QUIRK_TRIM_BROKEN),
/* Some INAND MCP devices advertise incorrect timeout values */
MMC_FIXUP("SEM04G", 0x45, CID_OEMID_ANY, add_quirk_mmc,
MMC_QUIRK_INAND_DATA_TIMEOUT),
END_FIXUP
};
@@ -170,134 +138,12 @@ static const struct mmc_fixup sdio_fixup_methods[] = {
END_FIXUP
};
#ifndef SDIO_VENDOR_ID_TI
#define SDIO_VENDOR_ID_TI 0x0097
#endif
#ifndef SDIO_DEVICE_ID_TI_WL1271
#define SDIO_DEVICE_ID_TI_WL1271 0x4076
#endif
#ifndef SDIO_VENDOR_ID_STE
#define SDIO_VENDOR_ID_STE 0x0020
#endif
#ifndef SDIO_DEVICE_ID_STE_CW1200
#define SDIO_DEVICE_ID_STE_CW1200 0x2280
#endif
#ifndef SDIO_DEVICE_ID_MARVELL_8797_F0
#define SDIO_DEVICE_ID_MARVELL_8797_F0 0x9128
#endif
#ifndef SDIO_VENDOR_ID_MSM
#define SDIO_VENDOR_ID_MSM 0x0070
#endif
#ifndef SDIO_DEVICE_ID_MSM_WCN1314
#define SDIO_DEVICE_ID_MSM_WCN1314 0x2881
#endif
#ifndef SDIO_VENDOR_ID_MSM_QCA
#define SDIO_VENDOR_ID_MSM_QCA 0x271
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6003_1
#define SDIO_DEVICE_ID_MSM_QCA_AR6003_1 0x300
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6003_2
#define SDIO_DEVICE_ID_MSM_QCA_AR6003_2 0x301
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6004_1
#define SDIO_DEVICE_ID_MSM_QCA_AR6004_1 0x400
#endif
#ifndef SDIO_DEVICE_ID_MSM_QCA_AR6004_2
#define SDIO_DEVICE_ID_MSM_QCA_AR6004_2 0x401
#endif
#ifndef SDIO_VENDOR_ID_QCA6574
#define SDIO_VENDOR_ID_QCA6574 0x271
#endif
#ifndef SDIO_DEVICE_ID_QCA6574
#define SDIO_DEVICE_ID_QCA6574 0x50a
#endif
#ifndef SDIO_VENDOR_ID_QCA9377
#define SDIO_VENDOR_ID_QCA9377 0x271
#endif
#ifndef SDIO_DEVICE_ID_QCA9377
#define SDIO_DEVICE_ID_QCA9377 0x701
#endif
/*
* This hook just adds a quirk for all sdio devices
*/
static void add_quirk_for_sdio_devices(struct mmc_card *card, int data)
{
if (mmc_card_sdio(card))
card->quirks |= data;
}
static const struct mmc_fixup mmc_fixup_methods[] = {
/* by default sdio devices are considered CLK_GATING broken */
/* good cards will be whitelisted as they are tested */
SDIO_FIXUP(SDIO_ANY_ID, SDIO_ANY_ID,
add_quirk_for_sdio_devices,
MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM, SDIO_DEVICE_ID_MSM_WCN1314,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6003_1,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6003_2,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6004_1,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_MSM_QCA, SDIO_DEVICE_ID_MSM_QCA_AR6004_2,
remove_quirk, MMC_QUIRK_BROKEN_CLK_GATING),
SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271,
add_quirk, MMC_QUIRK_NONSTD_FUNC_IF),
SDIO_FIXUP(SDIO_VENDOR_ID_TI, SDIO_DEVICE_ID_TI_WL1271,
add_quirk, MMC_QUIRK_DISABLE_CD),
SDIO_FIXUP(SDIO_VENDOR_ID_STE, SDIO_DEVICE_ID_STE_CW1200,
add_quirk, MMC_QUIRK_BROKEN_BYTE_MODE_512),
SDIO_FIXUP(SDIO_VENDOR_ID_MARVELL, SDIO_DEVICE_ID_MARVELL_8797_F0,
add_quirk, MMC_QUIRK_BROKEN_IRQ_POLLING),
SDIO_FIXUP(SDIO_VENDOR_ID_QCA6574, SDIO_DEVICE_ID_QCA6574,
add_quirk, MMC_QUIRK_QCA6574_SETTINGS),
SDIO_FIXUP(SDIO_VENDOR_ID_QCA9377, SDIO_DEVICE_ID_QCA9377,
add_quirk, MMC_QUIRK_QCA9377_SETTINGS),
END_FIXUP
};
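The fixup tables above drive a simple table walk: each entry's vendor/device IDs are compared against the card (with wildcards), and matching entries OR quirk flags into the card. A toy version of that mechanism, with illustrative IDs and quirk bits rather than the kernel's definitions:

```c
#include <assert.h>

#define ANY_ID            0xFFFFu    /* stands in for SDIO_ANY_ID */
#define QUIRK_NONSTD_IF   (1u << 0)  /* illustrative quirk bit */

struct fixup {
    unsigned int vendor, device, quirk;
};

struct sdio_card {
    unsigned int vendor, device, quirks;
};

/* Walk the table and apply every entry whose IDs match the card. */
static void apply_fixups(struct sdio_card *c, const struct fixup *table,
                         unsigned int n)
{
    for (unsigned int i = 0; i < n; i++) {
        if ((table[i].vendor == ANY_ID || table[i].vendor == c->vendor) &&
            (table[i].device == ANY_ID || table[i].device == c->device))
            c->quirks |= table[i].quirk;
    }
}
```

The real `mmc_fixup_device()` below adds further match criteria (CID name, OEM ID, revision range) and dispatches through a per-entry callback such as `add_quirk` or `remove_quirk`, but the matching skeleton is the same.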
static inline void mmc_fixup_device(struct mmc_card *card,
const struct mmc_fixup *table)
{
const struct mmc_fixup *f;
u64 rev = cid_rev_card(card);
/* Non-core specific workarounds. */
if (!table)
table = mmc_fixup_methods;
for (f = table; f->vendor_fixup; f++) {
if ((f->manfid == CID_MANFID_ANY ||
f->manfid == card->cid.manfid) &&


@@ -1,116 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
*/
#include <linux/mmc/ring_buffer.h>
#include <linux/mmc/host.h>
#include <linux/seq_file.h>
void mmc_stop_tracing(struct mmc_host *mmc)
{
mmc->trace_buf.stop_tracing = true;
}
void mmc_trace_write(struct mmc_host *mmc,
const char *fmt, ...)
{
unsigned int idx;
va_list args;
char *event;
unsigned long flags;
char str[MMC_TRACE_EVENT_SZ];
if (unlikely(!mmc->trace_buf.data) ||
unlikely(mmc->trace_buf.stop_tracing))
return;
/*
 * An increment and modulus keep the index within array bounds.
 * The cast to unsigned is necessary so the increment and rollover
 * wrap to 0 correctly.
*/
spin_lock_irqsave(&mmc->trace_buf.trace_lock, flags);
mmc->trace_buf.wr_idx += 1;
idx = ((unsigned int)mmc->trace_buf.wr_idx) &
(MMC_TRACE_RBUF_NUM_EVENTS - 1);
spin_unlock_irqrestore(&mmc->trace_buf.trace_lock, flags);
/* Catch some unlikely machine specific wrap-around bug */
if (unlikely(idx > (MMC_TRACE_RBUF_NUM_EVENTS - 1))) {
pr_err("%s: %s: Invalid idx:%d for mmc trace, tracing stopped !\n",
mmc_hostname(mmc), __func__, idx);
mmc_stop_tracing(mmc);
return;
}
event = &mmc->trace_buf.data[idx * MMC_TRACE_EVENT_SZ];
va_start(args, fmt);
snprintf(str, MMC_TRACE_EVENT_SZ, "<%d> %lld: %s: %s",
raw_smp_processor_id(),
ktime_to_ns(ktime_get()),
mmc_hostname(mmc), fmt);
memset(event, '\0', MMC_TRACE_EVENT_SZ);
vscnprintf(event, MMC_TRACE_EVENT_SZ, str, args);
va_end(args);
}
void mmc_trace_init(struct mmc_host *mmc)
{
BUILD_BUG_ON_NOT_POWER_OF_2(MMC_TRACE_RBUF_NUM_EVENTS);
mmc->trace_buf.data = (char *)
__get_free_pages(GFP_KERNEL|__GFP_ZERO,
MMC_TRACE_RBUF_SZ_ORDER);
if (!mmc->trace_buf.data) {
pr_err("%s: %s: Unable to allocate trace for mmc\n",
__func__, mmc_hostname(mmc));
return;
}
spin_lock_init(&mmc->trace_buf.trace_lock);
mmc->trace_buf.wr_idx = -1;
}
void mmc_trace_free(struct mmc_host *mmc)
{
if (mmc->trace_buf.data)
free_pages((unsigned long)mmc->trace_buf.data,
MMC_TRACE_RBUF_SZ_ORDER);
}
void mmc_dump_trace_buffer(struct mmc_host *mmc, struct seq_file *s)
{
unsigned int idx, cur_idx;
unsigned int N = MMC_TRACE_RBUF_NUM_EVENTS - 1;
char *event;
unsigned long flags;
if (!mmc->trace_buf.data)
return;
spin_lock_irqsave(&mmc->trace_buf.trace_lock, flags);
idx = ((unsigned int)mmc->trace_buf.wr_idx) & N;
cur_idx = (idx + 1) & N;
do {
event = &mmc->trace_buf.data[cur_idx * MMC_TRACE_EVENT_SZ];
if (s)
seq_printf(s, "%s", (char *)event);
else
pr_err("%s\n", (char *)event);
cur_idx = (cur_idx + 1) & N;
if (cur_idx == idx) {
event =
&mmc->trace_buf.data[cur_idx * MMC_TRACE_EVENT_SZ];
if (s)
seq_printf(s, "latest_event: %s",
(char *)event);
else
pr_err("latest_event: %s\n", (char *)event);
break;
}
} while (1);
spin_unlock_irqrestore(&mmc->trace_buf.trace_lock, flags);
}
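The ring buffer being deleted relies on `MMC_TRACE_RBUF_NUM_EVENTS` being a power of two (enforced by `BUILD_BUG_ON_NOT_POWER_OF_2`), so masking the free-running write counter with `N - 1` is equivalent to modulo `N` and stays in range even when the unsigned counter wraps. A sketch of that index arithmetic, with `NUM_EVENTS` as an illustrative stand-in for the kernel constant:

```c
#include <assert.h>

#define NUM_EVENTS 64u   /* must be a power of two for the mask trick */

/* Map a monotonically increasing (eventually wrapping) write index
 * onto a ring-buffer slot, as mmc_trace_write() does. */
static unsigned int slot_for(unsigned int wr_idx)
{
    return wr_idx & (NUM_EVENTS - 1);
}
```

Consecutive writes walk the slots 0..63 and wrap back to 0; even at the 32-bit overflow point the masked slot remains valid, which is why the "unlikely machine specific wrap-around" check in the deleted code should never fire.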


@@ -29,12 +29,6 @@
#include "sd.h"
#include "sd_ops.h"
#define UHS_SDR104_MIN_DTR (100 * 1000 * 1000)
#define UHS_DDR50_MIN_DTR (50 * 1000 * 1000)
#define UHS_SDR50_MIN_DTR (50 * 1000 * 1000)
#define UHS_SDR25_MIN_DTR (25 * 1000 * 1000)
#define UHS_SDR12_MIN_DTR (12.5 * 1000 * 1000)
static const unsigned int tran_exp[] = {
10000, 100000, 1000000, 10000000,
0, 0, 0, 0
@@ -367,9 +361,9 @@ int mmc_sd_switch_hs(struct mmc_card *card)
goto out;
if ((status[16] & 0xF) != 1) {
pr_warn("%s: Problem switching card into high-speed mode!, status:%x\n",
mmc_hostname(card->host), (status[16] & 0xF));
err = -EBUSY;
pr_warn("%s: Problem switching card into high-speed mode!\n",
mmc_hostname(card->host));
err = 0;
} else {
err = 1;
}
@@ -423,22 +417,18 @@ static void sd_update_bus_speed_mode(struct mmc_card *card)
}
if ((card->host->caps & MMC_CAP_UHS_SDR104) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR104) &&
(card->host->f_max > UHS_SDR104_MIN_DTR)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR104)) {
card->sd_bus_speed = UHS_SDR104_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50)) && (card->sw_caps.sd3_bus_mode &
SD_MODE_UHS_SDR50) &&
(card->host->f_max > UHS_SDR50_MIN_DTR)) {
card->sd_bus_speed = UHS_SDR50_BUS_SPEED;
} else if ((card->host->caps & MMC_CAP_UHS_DDR50) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_DDR50) &&
(card->host->f_max > UHS_DDR50_MIN_DTR)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_DDR50)) {
card->sd_bus_speed = UHS_DDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50)) && (card->sw_caps.sd3_bus_mode &
SD_MODE_UHS_SDR50)) {
card->sd_bus_speed = UHS_SDR50_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25)) &&
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR25) &&
(card->host->f_max > UHS_SDR25_MIN_DTR)) {
(card->sw_caps.sd3_bus_mode & SD_MODE_UHS_SDR25)) {
card->sd_bus_speed = UHS_SDR25_BUS_SPEED;
} else if ((card->host->caps & (MMC_CAP_UHS_SDR104 |
MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_SDR25 |
@@ -482,17 +472,15 @@ static int sd_set_bus_speed_mode(struct mmc_card *card, u8 *status)
if (err)
return err;
if ((status[16] & 0xF) != card->sd_bus_speed) {
pr_warn("%s: Problem setting bus speed mode(%u)! max_dtr:%u, timing:%u, status:%x\n",
mmc_hostname(card->host), card->sd_bus_speed,
card->sw_caps.uhs_max_dtr, timing, (status[16] & 0xF));
err = -EBUSY;
} else {
if ((status[16] & 0xF) != card->sd_bus_speed)
pr_warn("%s: Problem setting bus speed mode!\n",
mmc_hostname(card->host));
else {
mmc_set_timing(card->host, timing);
mmc_set_clock(card->host, card->sw_caps.uhs_max_dtr);
}
return err;
return 0;
}
/* Get host's max current setting at its current voltage */
@@ -584,64 +572,6 @@ static int sd_set_current_limit(struct mmc_card *card, u8 *status)
return 0;
}
/**
* mmc_sd_change_bus_speed() - Change SD card bus frequency at runtime
* @host: pointer to mmc host structure
* @freq: pointer to desired frequency to be set
*
* Change the SD card bus frequency at runtime after the card is
* initialized. Callers are expected to make sure of the card's
 * state (DATA/RCV/TRANSFER) before changing the frequency at runtime.
*
* If the frequency to change is greater than max. supported by card,
* *freq is changed to max. supported by card and if it is less than min.
* supported by host, *freq is changed to min. supported by host.
*/
static int mmc_sd_change_bus_speed(struct mmc_host *host, unsigned long *freq)
{
int err = 0;
struct mmc_card *card;
mmc_claim_host(host);
/*
* Assign card pointer after claiming host to avoid race
* conditions that may arise during removal of the card.
*/
card = host->card;
/* sanity checks */
if (!card || !freq) {
err = -EINVAL;
goto out;
}
mmc_set_clock(host, (unsigned int) (*freq));
if (!mmc_host_is_spi(card->host) && mmc_card_uhs(card)
&& card->host->ops->execute_tuning) {
/*
 * We try to probe the host driver for tuning at any
 * frequency; it is the host driver's responsibility to
 * perform actual tuning only when required.
*/
mmc_host_clk_hold(card->host);
err = card->host->ops->execute_tuning(card->host,
MMC_SEND_TUNING_BLOCK);
mmc_host_clk_release(card->host);
if (err) {
pr_warn("%s: %s: tuning execution failed %d. Restoring to previous clock %lu\n",
mmc_hostname(card->host), __func__, err,
host->clk_scaling.curr_freq);
mmc_set_clock(host, host->clk_scaling.curr_freq);
}
}
out:
mmc_release_host(host);
return err;
}
/*
* UHS-I specific initialization procedure
*/
@@ -889,9 +819,7 @@ static int mmc_sd_get_ro(struct mmc_host *host)
if (!host->ops->get_ro)
return -1;
mmc_host_clk_hold(host);
ro = host->ops->get_ro(host);
mmc_host_clk_release(host);
return ro;
}
@@ -964,10 +892,7 @@ unsigned mmc_sd_get_max_clock(struct mmc_card *card)
{
unsigned max_dtr = (unsigned int)-1;
if (mmc_card_uhs(card)) {
if (max_dtr > card->sw_caps.uhs_max_dtr)
max_dtr = card->sw_caps.uhs_max_dtr;
} else if (mmc_card_hs(card)) {
if (mmc_card_hs(card)) {
if (max_dtr > card->sw_caps.hs_max_dtr)
max_dtr = card->sw_caps.hs_max_dtr;
} else if (max_dtr > card->csd.max_dtr) {
@@ -1041,7 +966,6 @@ retry:
err = mmc_send_relative_addr(host, &card->rca);
if (err)
goto free_card;
host->card = card;
}
if (!oldcard) {
@@ -1143,16 +1067,12 @@ retry:
goto free_card;
}
done:
card->clk_scaling_highest = mmc_sd_get_max_clock(card);
card->clk_scaling_lowest = host->f_min;
host->card = card;
return 0;
free_card:
if (!oldcard) {
host->card = NULL;
if (!oldcard)
mmc_remove_card(card);
}
return err;
}
@@ -1162,12 +1082,8 @@ free_card:
*/
static void mmc_sd_remove(struct mmc_host *host)
{
mmc_exit_clk_scaling(host);
mmc_remove_card(host->card);
mmc_claim_host(host);
host->card = NULL;
mmc_release_host(host);
}
/*
@@ -1185,18 +1101,6 @@ static void mmc_sd_detect(struct mmc_host *host)
{
int err;
/*
 * Try to claim the host. If the lock cannot be acquired within 2 sec,
 * just return; this ensures that when this call is invoked due to
 * pm_suspend, suspend is not blocked for a longer duration.
*/
pm_runtime_get_sync(&host->card->dev);
if (!mmc_try_claim_host(host, 2000)) {
pm_runtime_mark_last_busy(&host->card->dev);
pm_runtime_put_autosuspend(&host->card->dev);
return;
}
mmc_get_card(host->card, NULL);
/*
@@ -1220,13 +1124,6 @@ static int _mmc_sd_suspend(struct mmc_host *host)
{
int err = 0;
err = mmc_suspend_clk_scaling(host);
if (err) {
pr_err("%s: %s: fail to suspend clock scaling (%d)\n",
mmc_hostname(host), __func__, err);
return err;
}
mmc_claim_host(host);
if (mmc_card_suspended(host->card))
@@ -1252,16 +1149,11 @@ static int mmc_sd_suspend(struct mmc_host *host)
{
int err;
MMC_TRACE(host, "%s: Enter\n", __func__);
err = _mmc_sd_suspend(host);
if (!err) {
pm_runtime_disable(&host->card->dev);
pm_runtime_set_suspended(&host->card->dev);
/* if suspend fails, force mmc_detect_change during resume */
} else if (mmc_bus_manual_resume(host))
host->ignore_bus_resume_flags = true;
MMC_TRACE(host, "%s: Exit err: %d\n", __func__, err);
}
return err;
}
@@ -1281,23 +1173,8 @@ static int _mmc_sd_resume(struct mmc_host *host)
mmc_power_up(host, host->card->ocr);
err = mmc_sd_init_card(host, host->card->ocr, host->card);
if (err == -ENOENT) {
pr_debug("%s: %s: found a different card(%d), do detect change\n",
mmc_hostname(host), __func__, err);
mmc_card_set_removed(host->card);
mmc_detect_change(host, msecs_to_jiffies(200));
} else if (err) {
goto out;
}
mmc_card_clr_suspended(host->card);
err = mmc_resume_clk_scaling(host);
if (err) {
pr_err("%s: %s: fail to resume clock scaling (%d)\n",
mmc_hostname(host), __func__, err);
goto out;
}
out:
mmc_release_host(host);
return err;
@@ -1308,16 +1185,8 @@ out:
*/
static int mmc_sd_resume(struct mmc_host *host)
{
int err = 0;
MMC_TRACE(host, "%s: Enter\n", __func__);
err = _mmc_sd_resume(host);
pm_runtime_set_active(&host->card->dev);
pm_runtime_mark_last_busy(&host->card->dev);
pm_runtime_enable(&host->card->dev);
MMC_TRACE(host, "%s: Exit err: %d\n", __func__, err);
return err;
return 0;
}
/*
@@ -1368,7 +1237,6 @@ static const struct mmc_bus_ops mmc_sd_ops = {
.resume = mmc_sd_resume,
.alive = mmc_sd_alive,
.shutdown = mmc_sd_suspend,
.change_bus_speed = mmc_sd_change_bus_speed,
.hw_reset = mmc_sd_hw_reset,
};
@@ -1424,13 +1292,6 @@ int mmc_attach_sd(struct mmc_host *host)
goto remove_card;
mmc_claim_host(host);
err = mmc_init_clk_scaling(host);
if (err) {
mmc_release_host(host);
goto remove_card;
}
return 0;
remove_card:


@@ -184,23 +184,6 @@ static int sdio_read_cccr(struct mmc_card *card, u32 ocr)
card->sw_caps.sd3_drv_type |= SD_DRIVER_TYPE_C;
if (data & SDIO_DRIVE_SDTD)
card->sw_caps.sd3_drv_type |= SD_DRIVER_TYPE_D;
ret = mmc_io_rw_direct(card, 0, 0,
SDIO_CCCR_INTERRUPT_EXTENSION, 0, &data);
if (ret)
goto out;
if (data & SDIO_SUPPORT_ASYNC_INTR) {
if (card->host->caps2 &
MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE) {
data |= SDIO_ENABLE_ASYNC_INTR;
ret = mmc_io_rw_direct(card, 1, 0,
SDIO_CCCR_INTERRUPT_EXTENSION,
data, NULL);
if (ret)
goto out;
card->cccr.async_intr_sup = 1;
}
}
}
/* if no uhs mode ensure we check for high speed */
@@ -219,60 +202,12 @@ out:
return ret;
}
static void sdio_enable_vendor_specific_settings(struct mmc_card *card)
{
int ret;
u8 settings;
if (mmc_enable_qca6574_settings(card) ||
mmc_enable_qca9377_settings(card)) {
ret = mmc_io_rw_direct(card, 1, 0, 0xF2, 0x0F, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf2 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
ret = mmc_io_rw_direct(card, 0, 0, 0xF1, 0, &settings);
if (ret) {
pr_crit("%s: failed to read fn 0xf1 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
settings |= 0x80;
ret = mmc_io_rw_direct(card, 1, 0, 0xF1, settings, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf1 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
ret = mmc_io_rw_direct(card, 0, 0, 0xF0, 0, &settings);
if (ret) {
pr_crit("%s: failed to read fn 0xf0 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
settings |= 0x20;
ret = mmc_io_rw_direct(card, 1, 0, 0xF0, settings, NULL);
if (ret) {
pr_crit("%s: failed to write to fn 0xf0 %d\n",
mmc_hostname(card->host), ret);
goto out;
}
}
out:
return;
}
static int sdio_enable_wide(struct mmc_card *card)
{
int ret;
u8 ctrl;
if (!(card->host->caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)))
if (!(card->host->caps & MMC_CAP_4_BIT_DATA))
return 0;
if (card->cccr.low_speed && !card->cccr.wide_bus)
@@ -288,10 +223,7 @@ static int sdio_enable_wide(struct mmc_card *card)
/* set as 4-bit bus width */
ctrl &= ~SDIO_BUS_WIDTH_MASK;
if (card->host->caps & MMC_CAP_8_BIT_DATA)
ctrl |= SDIO_BUS_WIDTH_8BIT;
else if (card->host->caps & MMC_CAP_4_BIT_DATA)
ctrl |= SDIO_BUS_WIDTH_4BIT;
ctrl |= SDIO_BUS_WIDTH_4BIT;
ret = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_IF, ctrl, NULL);
if (ret)
@@ -332,7 +264,7 @@ static int sdio_disable_wide(struct mmc_card *card)
int ret;
u8 ctrl;
if (!(card->host->caps & (MMC_CAP_4_BIT_DATA | MMC_CAP_8_BIT_DATA)))
if (!(card->host->caps & MMC_CAP_4_BIT_DATA))
return 0;
if (card->cccr.low_speed && !card->cccr.wide_bus)
@@ -342,10 +274,10 @@ static int sdio_disable_wide(struct mmc_card *card)
if (ret)
return ret;
if (!(ctrl & (SDIO_BUS_WIDTH_4BIT | SDIO_BUS_WIDTH_8BIT)))
if (!(ctrl & SDIO_BUS_WIDTH_4BIT))
return 0;
ctrl &= ~(SDIO_BUS_WIDTH_4BIT | SDIO_BUS_WIDTH_8BIT);
ctrl &= ~SDIO_BUS_WIDTH_4BIT;
ctrl |= SDIO_BUS_ASYNC_INT;
ret = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_IF, ctrl, NULL);
@@ -563,9 +495,6 @@ static int sdio_set_bus_speed_mode(struct mmc_card *card)
if (err)
return err;
/* Vendor specific settings based on card quirks */
sdio_enable_vendor_specific_settings(card);
speed &= ~SDIO_SPEED_BSS_MASK;
speed |= bus_speed;
err = mmc_io_rw_direct(card, 1, 0, SDIO_CCCR_SPEED, speed, NULL);
@@ -687,7 +616,7 @@ try_again:
if (oldcard && (oldcard->type != MMC_TYPE_SD_COMBO ||
memcmp(card->raw_cid, oldcard->raw_cid,
sizeof(card->raw_cid)) != 0)) {
sizeof(card->raw_cid)) != 0)) {
mmc_remove_card(card);
return -ENOENT;
}
@@ -703,11 +632,8 @@ try_again:
/*
* Call the optional HC's init_card function to handle quirks.
*/
if (host->ops->init_card) {
mmc_host_clk_hold(host);
if (host->ops->init_card)
host->ops->init_card(host, card);
mmc_host_clk_release(host);
}
/*
* If the host and card support UHS-I mode request the card
@@ -864,12 +790,7 @@ try_again:
* Switch to wider bus (if supported).
*/
err = sdio_enable_4bit_bus(card);
if (err > 0) {
if (card->host->caps & MMC_CAP_8_BIT_DATA)
mmc_set_bus_width(card->host, MMC_BUS_WIDTH_8);
else if (card->host->caps & MMC_CAP_4_BIT_DATA)
mmc_set_bus_width(card->host, MMC_BUS_WIDTH_4);
} else if (err)
if (err)
goto remove;
}
@@ -1014,7 +935,6 @@ static int mmc_sdio_pre_suspend(struct mmc_host *host)
*/
static int mmc_sdio_suspend(struct mmc_host *host)
{
MMC_TRACE(host, "%s: Enter\n", __func__);
mmc_claim_host(host);
if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host))
@@ -1022,15 +942,13 @@ static int mmc_sdio_suspend(struct mmc_host *host)
if (!mmc_card_keep_power(host)) {
mmc_power_off(host);
} else if (host->ios.clock) {
mmc_gate_clock(host);
} else if (host->retune_period) {
mmc_retune_timer_stop(host);
mmc_retune_needed(host);
}
mmc_release_host(host);
MMC_TRACE(host, "%s: Exit\n", __func__);
return 0;
}
@@ -1038,7 +956,6 @@ static int mmc_sdio_resume(struct mmc_host *host)
{
int err = 0;
MMC_TRACE(host, "%s: Enter\n", __func__);
/* Basic card reinitialization. */
mmc_claim_host(host);
@@ -1064,30 +981,18 @@
} else if (mmc_card_keep_power(host) && mmc_card_wake_sdio_irq(host)) {
/* We may have switched to 1-bit mode during suspend */
err = sdio_enable_4bit_bus(host->card);
if (err > 0) {
if (host->caps & MMC_CAP_8_BIT_DATA)
mmc_set_bus_width(host, MMC_BUS_WIDTH_8);
else if (host->caps & MMC_CAP_4_BIT_DATA)
mmc_set_bus_width(host, MMC_BUS_WIDTH_4);
err = 0;
}
}
if (!err && host->sdio_irqs) {
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD)) {
if (!(host->caps2 & MMC_CAP2_SDIO_IRQ_NOTHREAD))
wake_up_process(host->sdio_irq_thread);
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
else if (host->caps & MMC_CAP_SDIO_IRQ)
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
}
mmc_release_host(host);
host->pm_flags &= ~MMC_PM_KEEP_POWER;
host->pm_flags &= ~MMC_PM_WAKE_SDIO_IRQ;
MMC_TRACE(host, "%s: Exit err: %d\n", __func__, err);
return err;
}


@@ -277,16 +277,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
break;
/* null entries have no link field or data */
if (tpl_code == 0x00) {
if (card->cis.vendor == 0x70 &&
(card->cis.device == 0x2460 ||
card->cis.device == 0x0460 ||
card->cis.device == 0x23F1 ||
card->cis.device == 0x23F0))
break;
if (tpl_code == 0x00)
continue;
}
ret = mmc_io_rw_direct(card, 0, 0, ptr++, 0, &tpl_link);
if (ret)

View File

@@ -97,9 +97,7 @@ void sdio_run_irqs(struct mmc_host *host)
mmc_claim_host(host);
if (host->sdio_irqs) {
host->sdio_irq_pending = true;
mmc_host_clk_hold(host);
process_sdio_pending_irqs(host);
mmc_host_clk_release(host);
if (host->ops->ack_sdio_irq)
host->ops->ack_sdio_irq(host);
}
@@ -127,7 +125,6 @@ static int sdio_irq_thread(void *_host)
struct sched_param param = { .sched_priority = 1 };
unsigned long period, idle_period;
int ret;
bool ws;
sched_setscheduler(current, SCHED_FIFO, &param);
@@ -162,17 +159,6 @@ static int sdio_irq_thread(void *_host)
&host->sdio_irq_thread_abort);
if (ret)
break;
ws = false;
/*
 * prevent suspend if it has already started by the time we are
 * scheduled; 100 msec (approx. value) should be enough for the
 * system to resume and attend to the card's request
*/
if ((host->dev_status == DEV_SUSPENDING) ||
(host->dev_status == DEV_SUSPENDED)) {
pm_wakeup_event(&host->card->dev, 100);
ws = true;
}
ret = process_sdio_pending_irqs(host);
host->sdio_irq_pending = false;
mmc_release_host(host);
@@ -204,27 +190,15 @@ }
}
set_current_state(TASK_INTERRUPTIBLE);
if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
if (host->caps & MMC_CAP_SDIO_IRQ)
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
/*
* function drivers would have processed the event from card
* unless suspended, hence release wake source
*/
if (ws && (host->dev_status == DEV_RESUMED))
pm_relax(&host->card->dev);
if (!kthread_should_stop())
schedule_timeout(period);
set_current_state(TASK_RUNNING);
} while (!kthread_should_stop());
if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
if (host->caps & MMC_CAP_SDIO_IRQ)
host->ops->enable_sdio_irq(host, 0);
mmc_host_clk_release(host);
}
pr_debug("%s: IRQ thread exiting with code %d\n",
mmc_hostname(host), ret);
@@ -250,9 +224,7 @@ static int sdio_card_irq_get(struct mmc_card *card)
return err;
}
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 1);
mmc_host_clk_release(host);
}
}
@@ -273,9 +245,7 @@ static int sdio_card_irq_put(struct mmc_card *card)
atomic_set(&host->sdio_irq_thread_abort, 1);
kthread_stop(host->sdio_irq_thread);
} else if (host->caps & MMC_CAP_SDIO_IRQ) {
mmc_host_clk_hold(host);
host->ops->enable_sdio_irq(host, 0);
mmc_host_clk_release(host);
}
}


@@ -17,7 +17,6 @@
#include <linux/mmc/slot-gpio.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/extcon.h>
#include "slot-gpio.h"
@@ -65,15 +64,6 @@ int mmc_gpio_alloc(struct mmc_host *host)
int mmc_gpio_get_ro(struct mmc_host *host)
{
struct mmc_gpio *ctx = host->slot.handler_priv;
int ret;
if (host->extcon) {
ret = extcon_get_state(host->extcon, EXTCON_MECHANICAL);
if (ret < 0)
dev_err(mmc_dev(host), "%s: Extcon failed to check card state, ret=%d\n",
__func__, ret);
return ret;
}
if (!ctx || !ctx->ro_gpio)
return -ENOSYS;
@@ -193,53 +183,6 @@ int mmc_gpio_set_cd_wake(struct mmc_host *host, bool on)
}
EXPORT_SYMBOL(mmc_gpio_set_cd_wake);
static int mmc_card_detect_notifier(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct mmc_host *host = container_of(nb, struct mmc_host,
card_detect_nb);
host->trigger_card_event = true;
mmc_detect_change(host, 0);
return NOTIFY_DONE;
}
void mmc_register_extcon(struct mmc_host *host)
{
struct extcon_dev *extcon = host->extcon;
int err;
if (!extcon)
return;
host->card_detect_nb.notifier_call = mmc_card_detect_notifier;
err = extcon_register_notifier(extcon, EXTCON_MECHANICAL,
&host->card_detect_nb);
if (err) {
dev_err(mmc_dev(host), "%s: extcon_register_notifier() failed ret=%d\n",
__func__, err);
host->caps |= MMC_CAP_NEEDS_POLL;
}
}
EXPORT_SYMBOL(mmc_register_extcon);
void mmc_unregister_extcon(struct mmc_host *host)
{
struct extcon_dev *extcon = host->extcon;
int err;
if (!extcon)
return;
err = extcon_unregister_notifier(extcon, EXTCON_MECHANICAL,
&host->card_detect_nb);
if (err)
dev_err(mmc_dev(host), "%s: extcon_unregister_notifier() failed ret=%d\n",
__func__, err);
}
EXPORT_SYMBOL(mmc_unregister_extcon);
/* Register an alternate interrupt service routine for
* the card-detect GPIO.
*/


@@ -451,7 +451,7 @@ config MMC_ATMELMCI
config MMC_SDHCI_MSM
tristate "Qualcomm Technologies, Inc. SDHCI Controller Support"
depends on ARCH_QCOM || ARCH_MSM || (ARM && COMPILE_TEST)
depends on ARCH_QCOM || (ARM && COMPILE_TEST)
depends on MMC_SDHCI_PLTFM
select MMC_SDHCI_IO_ACCESSORS
help


@@ -85,8 +85,6 @@ obj-$(CONFIG_MMC_SDHCI_OF_ESDHC) += sdhci-of-esdhc.o
obj-$(CONFIG_MMC_SDHCI_OF_HLWD) += sdhci-of-hlwd.o
obj-$(CONFIG_MMC_SDHCI_OF_DWCMSHC) += sdhci-of-dwcmshc.o
obj-$(CONFIG_MMC_SDHCI_BCM_KONA) += sdhci-bcm-kona.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
obj-$(CONFIG_MMC_SDHCI_MSM_ICE) += sdhci-msm-ice.o
obj-$(CONFIG_MMC_SDHCI_IPROC) += sdhci-iproc.o
obj-$(CONFIG_MMC_SDHCI_MSM) += sdhci-msm.o
obj-$(CONFIG_MMC_SDHCI_ST) += sdhci-st.o


@@ -1,579 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
*/
#include "sdhci-msm-ice.h"
static void sdhci_msm_ice_error_cb(void *host_ctrl, u32 error)
{
struct sdhci_msm_host *msm_host = (struct sdhci_msm_host *)host_ctrl;
dev_err(&msm_host->pdev->dev, "%s: Error in ice operation 0x%x\n",
__func__, error);
if (msm_host->ice.state == SDHCI_MSM_ICE_STATE_ACTIVE)
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
}
static struct platform_device *sdhci_msm_ice_get_pdevice(struct device *dev)
{
struct device_node *node;
struct platform_device *ice_pdev = NULL;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_pdev = qcom_ice_get_pdevice(node);
out:
return ice_pdev;
}
static
struct qcom_ice_variant_ops *sdhci_msm_ice_get_vops(struct device *dev)
{
struct qcom_ice_variant_ops *ice_vops = NULL;
struct device_node *node;
node = of_parse_phandle(dev->of_node, SDHC_MSM_CRYPTO_LABEL, 0);
if (!node) {
dev_dbg(dev, "%s: sdhc-msm-crypto property not specified\n",
__func__);
goto out;
}
ice_vops = qcom_ice_get_variant_ops(node);
of_node_put(node);
out:
return ice_vops;
}
static
void sdhci_msm_enable_ice_hci(struct sdhci_host *host, bool enable)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
u32 config = 0;
u32 ice_cap = 0;
/*
* Enable the cryptographic support inside SDHC.
* This is a global config which needs to be enabled
* all the time.
 * Only when it is enabled, the ICE_HCI capability
* will get reflected in CQCAP register.
*/
config = readl_relaxed(host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
if (enable)
config &= ~DISABLE_CRYPTO;
else
config |= DISABLE_CRYPTO;
writel_relaxed(config, host->ioaddr + HC_VENDOR_SPECIFIC_FUNC4);
/*
 * The CQCAP register is in a different register space from the above
 * ICE global enable register, so an mb() is required to ensure the
 * above write completes before reading the CQCAP register.
*/
mb();
/*
* Check if ICE HCI capability support is present
* If present, enable it.
*/
ice_cap = readl_relaxed(msm_host->cryptoio + ICE_CQ_CAPABILITIES);
if (ice_cap & ICE_HCI_SUPPORT) {
config = readl_relaxed(msm_host->cryptoio + ICE_CQ_CONFIG);
if (enable)
config |= CRYPTO_GENERAL_ENABLE;
else
config &= ~CRYPTO_GENERAL_ENABLE;
writel_relaxed(config, msm_host->cryptoio + ICE_CQ_CONFIG);
}
}
int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct device *sdhc_dev;
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (!msm_host || !msm_host->pdev) {
pr_err("%s: invalid msm_host %p or msm_host->pdev\n",
__func__, msm_host);
return -EINVAL;
}
sdhc_dev = &msm_host->pdev->dev;
msm_host->ice.vops = sdhci_msm_ice_get_vops(sdhc_dev);
msm_host->ice.pdev = sdhci_msm_ice_get_pdevice(sdhc_dev);
if (msm_host->ice.pdev == ERR_PTR(-EPROBE_DEFER)) {
dev_err(sdhc_dev, "%s: ICE device not probed yet\n",
__func__);
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
return -EPROBE_DEFER;
}
if (!msm_host->ice.pdev) {
dev_dbg(sdhc_dev, "%s: invalid platform device\n", __func__);
msm_host->ice.vops = NULL;
return -ENODEV;
}
if (!msm_host->ice.vops) {
dev_dbg(sdhc_dev, "%s: invalid ice vops\n", __func__);
msm_host->ice.pdev = NULL;
return -ENODEV;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_DISABLED;
return 0;
}
static
int sdhci_msm_ice_pltfm_init(struct sdhci_msm_host *msm_host)
{
struct resource *ice_memres = NULL;
struct platform_device *pdev = msm_host->pdev;
int err = 0;
if (!msm_host->ice_hci_support)
goto out;
/*
* ICE HCI registers are present in cmdq register space.
* So map the cmdq mem for accessing ICE HCI registers.
*/
ice_memres = platform_get_resource_byname(pdev,
IORESOURCE_MEM, "cmdq_mem");
if (!ice_memres) {
dev_err(&pdev->dev, "Failed to get iomem resource for ice\n");
err = -EINVAL;
goto out;
}
msm_host->cryptoio = devm_ioremap(&pdev->dev,
ice_memres->start,
resource_size(ice_memres));
if (!msm_host->cryptoio) {
dev_err(&pdev->dev, "Failed to remap registers\n");
err = -ENOMEM;
}
out:
return err;
}
int sdhci_msm_ice_init(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.vops->init) {
err = sdhci_msm_ice_pltfm_init(msm_host);
if (err)
goto out;
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
err = msm_host->ice.vops->init(msm_host->ice.pdev,
msm_host,
sdhci_msm_ice_error_cb);
if (err) {
pr_err("%s: ice init err %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, false);
goto out;
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
}
out:
return err;
}
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
writel_relaxed(SDHCI_MSM_ICE_ENABLE_BYPASS,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
}
static
int sdhci_msm_ice_get_cfg(struct sdhci_msm_host *msm_host, struct request *req,
unsigned int *bypass, short *key_index)
{
int err = 0;
struct ice_data_setting ice_set;
memset(&ice_set, 0, sizeof(struct ice_data_setting));
if (msm_host->ice.vops->config_start) {
err = msm_host->ice.vops->config_start(
msm_host->ice.pdev,
req, &ice_set, false);
if (err) {
pr_err("%s: ice config failed %d\n",
mmc_hostname(msm_host->mmc), err);
return err;
}
}
/* if writing data command */
if (rq_data_dir(req) == WRITE)
*bypass = ice_set.encr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
/* if reading data command */
else if (rq_data_dir(req) == READ)
*bypass = ice_set.decr_bypass ?
SDHCI_MSM_ICE_ENABLE_BYPASS :
SDHCI_MSM_ICE_DISABLE_BYPASS;
*key_index = ice_set.crypto_data.key_index;
return err;
}
static
void sdhci_msm_ice_update_cfg(struct sdhci_host *host, u64 lba, u32 slot,
unsigned int bypass, short key_index, u32 cdu_sz)
{
unsigned int ctrl_info_val = 0;
/* Configure ICE index */
ctrl_info_val =
(key_index &
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX;
/* Configure data unit size of transfer request */
ctrl_info_val |=
(cdu_sz &
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU;
/* Configure ICE bypass mode */
ctrl_info_val |=
(bypass & MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS)
<< OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS;
writel_relaxed((lba & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n + 16 * slot);
writel_relaxed(((lba >> 32) & 0xFFFFFFFF),
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n + 16 * slot);
writel_relaxed(ctrl_info_val,
host->ioaddr + CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n + 16 * slot);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
static inline
void sdhci_msm_ice_hci_update_cmdq_cfg(u64 dun, unsigned int bypass,
short key_index, u64 *ice_ctx)
{
/*
 * The naming convention changed between the ICE2.0 and ICE3.0
 * register fields. Below are the equivalent names for
 * ICE3.0 vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
if (ice_ctx)
*ice_ctx = DATA_UNIT_NUM(dun) |
CRYPTO_CONFIG_INDEX(key_index) |
CRYPTO_ENABLE(!bypass);
}
static
void sdhci_msm_ice_hci_update_noncq_cfg(struct sdhci_host *host,
u64 dun, unsigned int bypass, short key_index)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
unsigned int crypto_params = 0;
/*
 * The naming convention changed between the ICE2.0 and ICE3.0
 * register fields. Below are the equivalent names for
 * ICE3.0 vs ICE2.0:
* Data Unit Number(DUN) == Logical Base address(LBA)
* Crypto Configuration index (CCI) == Key Index
* Crypto Enable (CE) == !BYPASS
*/
/* Configure ICE bypass mode */
crypto_params |=
((!bypass) & MASK_SDHCI_MSM_ICE_HCI_PARAM_CE)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE;
/* Configure Crypto Configure Index (CCI) */
crypto_params |= (key_index &
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI)
<< OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI;
writel_relaxed((crypto_params & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_PARAMS);
/* Update DUN */
writel_relaxed((dun & 0xFFFFFFFF),
msm_host->cryptoio + ICE_NONCQ_CRYPTO_DUN);
/* Ensure ICE registers are configured before issuing SDHCI request */
mb();
}
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
struct request *req;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
#ifdef CONFIG_PFK
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
#else
dun = req->__sector;
#endif
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_noncq_cfg(host, dun, bypass,
key_index);
} else {
/* For ICE versions earlier than ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cmdq_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
short key_index = 0;
u64 dun = 0;
unsigned int bypass = SDHCI_MSM_ICE_ENABLE_BYPASS;
struct request *req;
u32 cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_512_B;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
WARN_ON(!mrq);
if (!mrq)
return -EINVAL;
req = mrq->req;
if (req && req->bio) {
#ifdef CONFIG_PFK
if (bio_dun(req->bio)) {
dun = bio_dun(req->bio);
cdu_sz = SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB;
} else {
dun = req->__sector;
}
#else
dun = req->__sector;
#endif
err = sdhci_msm_ice_get_cfg(msm_host, req, &bypass, &key_index);
if (err)
return err;
pr_debug("%s: %s: slot %d bypass %d key_index %d\n",
mmc_hostname(host->mmc),
(rq_data_dir(req) == WRITE) ? "WRITE" : "READ",
slot, bypass, key_index);
}
if (msm_host->ice_hci_support) {
/* For ICE HCI / ICE3.0 */
sdhci_msm_ice_hci_update_cmdq_cfg(dun, bypass, key_index,
ice_ctx);
} else {
/* For ICE versions earlier than ICE3.0 */
sdhci_msm_ice_update_cfg(host, dun, slot, bypass, key_index,
cdu_sz);
}
return 0;
}
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
struct request *req;
if (!host->is_crypto_en)
return 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
req = mrq->req;
if (req) {
if (msm_host->ice.vops->config_end) {
err = msm_host->ice.vops->config_end(req);
if (err) {
pr_err("%s: ice config end failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
}
return 0;
}
int sdhci_msm_ice_reset(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->reset) {
err = msm_host->ice.vops->reset(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice reset failed %d\n",
mmc_hostname(host->mmc), err);
sdhci_msm_ice_print_regs(host);
return err;
}
}
/* If ICE HCI support is present then re-enable it */
if (msm_host->ice_hci_support)
sdhci_msm_enable_ice_hci(host, true);
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state after reset %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
return 0;
}
int sdhci_msm_ice_resume(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_SUSPENDED) {
pr_err("%s: ice is in invalid state before resume %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->resume) {
err = msm_host->ice.vops->resume(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice resume failed %d\n",
mmc_hostname(host->mmc), err);
return err;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_ACTIVE;
return 0;
}
int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int err = 0;
if (msm_host->ice.state !=
SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state before suspend %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->suspend) {
err = msm_host->ice.vops->suspend(msm_host->ice.pdev);
if (err) {
pr_err("%s: ice suspend failed %d\n",
mmc_hostname(host->mmc), err);
return -EINVAL;
}
}
msm_host->ice.state = SDHCI_MSM_ICE_STATE_SUSPENDED;
return 0;
}
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
int stat = -EINVAL;
if (msm_host->ice.state != SDHCI_MSM_ICE_STATE_ACTIVE) {
pr_err("%s: ice is in invalid state %d\n",
mmc_hostname(host->mmc), msm_host->ice.state);
return -EINVAL;
}
if (msm_host->ice.vops->status) {
*ice_status = 0;
stat = msm_host->ice.vops->status(msm_host->ice.pdev);
if (stat < 0) {
pr_err("%s: ice get sts failed %d\n",
mmc_hostname(host->mmc), stat);
return -EINVAL;
}
*ice_status = stat;
}
return 0;
}
void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host->ice.vops->debug)
msm_host->ice.vops->debug(msm_host->ice.pdev);
}


@@ -1,164 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2015, 2017-2019, The Linux Foundation. All rights reserved.
*/
#ifndef __SDHCI_MSM_ICE_H__
#define __SDHCI_MSM_ICE_H__
#include <linux/io.h>
#include <linux/of.h>
#include <linux/blkdev.h>
//#include <crypto/ice.h>
#include "sdhci-msm.h"
#define SDHC_MSM_CRYPTO_LABEL "sdhc-msm-crypto"
/* Timeout waiting for ICE initialization, which requires TZ access */
#define SDHCI_MSM_ICE_COMPLETION_TIMEOUT_MS 500
/*
* SDHCI host controller ICE registers. There are n [0..31]
* of each of these registers
*/
#define NUM_SDHCI_MSM_ICE_CTRL_INFO_n_REGS 32
#define CORE_VENDOR_SPEC_ICE_CTRL 0x300
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_1_n 0x304
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_2_n 0x308
#define CORE_VENDOR_SPEC_ICE_CTRL_INFO_3_n 0x30C
/* ICE3.0 registers added in the cmdq register space */
#define ICE_CQ_CAPABILITIES 0x04
#define ICE_HCI_SUPPORT (1 << 28)
#define ICE_CQ_CONFIG 0x08
#define CRYPTO_GENERAL_ENABLE (1 << 1)
#define ICE_NONCQ_CRYPTO_PARAMS 0x70
#define ICE_NONCQ_CRYPTO_DUN 0x74
/* ICE3.0 registers added in the hc register space */
#define HC_VENDOR_SPECIFIC_FUNC4 0x260
#define DISABLE_CRYPTO (1 << 15)
#define HC_VENDOR_SPECIFIC_ICE_CTRL 0x800
#define ICE_SW_RST_EN (1 << 0)
/* SDHCI MSM ICE CTRL Info register offset */
enum {
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 1,
OFFSET_SDHCI_MSM_ICE_CTRL_INFO_CDU = 6,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0,
OFFSET_SDHCI_MSM_ICE_HCI_PARAM_CE = 8,
};
/* SDHCI MSM ICE CTRL Info register masks */
enum {
MASK_SDHCI_MSM_ICE_CTRL_INFO_BYPASS = 0x1,
MASK_SDHCI_MSM_ICE_CTRL_INFO_KEY_INDEX = 0x1F,
MASK_SDHCI_MSM_ICE_CTRL_INFO_CDU = 0x7,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CE = 0x1,
MASK_SDHCI_MSM_ICE_HCI_PARAM_CCI = 0xff
};
/* SDHCI MSM ICE encryption/decryption bypass state */
enum {
SDHCI_MSM_ICE_DISABLE_BYPASS = 0,
SDHCI_MSM_ICE_ENABLE_BYPASS = 1,
};
/* SDHCI MSM ICE Crypto Data Unit of target DUN of Transfer Request */
enum {
SDHCI_MSM_ICE_TR_DATA_UNIT_512_B = 0,
SDHCI_MSM_ICE_TR_DATA_UNIT_1_KB = 1,
SDHCI_MSM_ICE_TR_DATA_UNIT_2_KB = 2,
SDHCI_MSM_ICE_TR_DATA_UNIT_4_KB = 3,
SDHCI_MSM_ICE_TR_DATA_UNIT_8_KB = 4,
SDHCI_MSM_ICE_TR_DATA_UNIT_16_KB = 5,
SDHCI_MSM_ICE_TR_DATA_UNIT_32_KB = 6,
SDHCI_MSM_ICE_TR_DATA_UNIT_64_KB = 7,
};
/* SDHCI MSM ICE internal state */
enum {
SDHCI_MSM_ICE_STATE_DISABLED = 0,
SDHCI_MSM_ICE_STATE_ACTIVE = 1,
SDHCI_MSM_ICE_STATE_SUSPENDED = 2,
};
/* crypto context fields in cmdq data command task descriptor */
#define DATA_UNIT_NUM(x) (((u64)(x) & 0xFFFFFFFF) << 0)
#define CRYPTO_CONFIG_INDEX(x) (((u64)(x) & 0xFF) << 32)
#define CRYPTO_ENABLE(x) (((u64)(x) & 0x1) << 47)
#ifdef CONFIG_MMC_SDHCI_MSM_ICE
int sdhci_msm_ice_get_dev(struct sdhci_host *host);
int sdhci_msm_ice_init(struct sdhci_host *host);
void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot);
int sdhci_msm_ice_cfg(struct sdhci_host *host, struct mmc_request *mrq,
u32 slot);
int sdhci_msm_ice_cmdq_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
int sdhci_msm_ice_cfg_end(struct sdhci_host *host, struct mmc_request *mrq);
int sdhci_msm_ice_reset(struct sdhci_host *host);
int sdhci_msm_ice_resume(struct sdhci_host *host);
int sdhci_msm_ice_suspend(struct sdhci_host *host);
int sdhci_msm_ice_get_status(struct sdhci_host *host, int *ice_status);
void sdhci_msm_ice_print_regs(struct sdhci_host *host);
#else
static inline int sdhci_msm_ice_get_dev(struct sdhci_host *host)
{
struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
struct sdhci_msm_host *msm_host = pltfm_host->priv;
if (msm_host) {
msm_host->ice.pdev = NULL;
msm_host->ice.vops = NULL;
}
return -ENODEV;
}
static inline int sdhci_msm_ice_init(struct sdhci_host *host)
{
return 0;
}
static inline void sdhci_msm_ice_cfg_reset(struct sdhci_host *host, u32 slot)
{
}
static inline int sdhci_msm_ice_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot)
{
return 0;
}
static inline int sdhci_msm_ice_cmdq_cfg(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx)
{
return 0;
}
static inline int sdhci_msm_ice_cfg_end(struct sdhci_host *host,
struct mmc_request *mrq)
{
return 0;
}
static inline int sdhci_msm_ice_reset(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_resume(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_suspend(struct sdhci_host *host)
{
return 0;
}
static inline int sdhci_msm_ice_get_status(struct sdhci_host *host,
int *ice_status)
{
return 0;
}
static inline void sdhci_msm_ice_print_regs(struct sdhci_host *host)
{
}
#endif /* CONFIG_MMC_SDHCI_MSM_ICE */
#endif /* __SDHCI_MSM_ICE_H__ */


@@ -1,272 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2016-2019, The Linux Foundation. All rights reserved.
*/
#ifndef __SDHCI_MSM_H__
#define __SDHCI_MSM_H__
#include <linux/mmc/mmc.h>
#include <linux/pm_qos.h>
#include "sdhci-pltfm.h"
/* This structure keeps information per regulator */
struct sdhci_msm_reg_data {
/* voltage regulator handle */
struct regulator *reg;
/* regulator name */
const char *name;
/* voltage level to be set */
u32 low_vol_level;
u32 high_vol_level;
/* Load values for low power and high power mode */
u32 lpm_uA;
u32 hpm_uA;
/* is this regulator enabled? */
bool is_enabled;
/* does this regulator need to be always on? */
bool is_always_on;
/* is low power mode setting required for this regulator? */
bool lpm_sup;
bool set_voltage_sup;
};
/*
* This structure keeps information for all the
* regulators required for an SDCC slot.
*/
struct sdhci_msm_slot_reg_data {
/* keeps VDD/VCC regulator info */
struct sdhci_msm_reg_data *vdd_data;
/* keeps VDD IO regulator info */
struct sdhci_msm_reg_data *vdd_io_data;
};
struct sdhci_msm_gpio {
u32 no;
const char *name;
bool is_enabled;
};
struct sdhci_msm_gpio_data {
struct sdhci_msm_gpio *gpio;
u8 size;
};
struct sdhci_msm_pin_data {
/*
* = 1 if controller pins are using gpios
* = 0 if controller has dedicated MSM pads
*/
u8 is_gpio;
struct sdhci_msm_gpio_data *gpio_data;
};
struct sdhci_pinctrl_data {
struct pinctrl *pctrl;
struct pinctrl_state *pins_active;
struct pinctrl_state *pins_sleep;
struct pinctrl_state *pins_drv_type_400KHz;
struct pinctrl_state *pins_drv_type_50MHz;
struct pinctrl_state *pins_drv_type_100MHz;
struct pinctrl_state *pins_drv_type_200MHz;
};
struct sdhci_msm_bus_voting_data {
struct msm_bus_scale_pdata *bus_pdata;
unsigned int *bw_vecs;
unsigned int bw_vecs_size;
};
struct sdhci_msm_cpu_group_map {
int nr_groups;
cpumask_t *mask;
};
struct sdhci_msm_pm_qos_latency {
s32 latency[SDHCI_POWER_POLICY_NUM];
};
struct sdhci_msm_pm_qos_data {
struct sdhci_msm_cpu_group_map cpu_group_map;
enum pm_qos_req_type irq_req_type;
int irq_cpu;
struct sdhci_msm_pm_qos_latency irq_latency;
struct sdhci_msm_pm_qos_latency *cmdq_latency;
struct sdhci_msm_pm_qos_latency *latency;
bool irq_valid;
bool cmdq_valid;
bool legacy_valid;
};
/*
* PM QoS for group voting management - each cpu group defined is associated
* with 1 instance of this structure.
*/
struct sdhci_msm_pm_qos_group {
struct pm_qos_request req;
struct delayed_work unvote_work;
atomic_t counter;
s32 latency;
};
/* PM QoS HW IRQ voting */
struct sdhci_msm_pm_qos_irq {
struct pm_qos_request req;
struct delayed_work unvote_work;
struct device_attribute enable_attr;
struct device_attribute status_attr;
atomic_t counter;
s32 latency;
bool enabled;
};
struct sdhci_msm_pltfm_data {
/* Supported UHS-I Modes */
u32 caps;
/* More capabilities */
u32 caps2;
unsigned long mmc_bus_width;
struct sdhci_msm_slot_reg_data *vreg_data;
bool nonremovable;
bool nonhotplug;
bool largeaddressbus;
bool pin_cfg_sts;
struct sdhci_msm_pin_data *pin_data;
struct sdhci_pinctrl_data *pctrl_data;
int status_gpio; /* card detection GPIO that is configured as IRQ */
struct sdhci_msm_bus_voting_data *voting_data;
u32 *sup_clk_table;
unsigned char sup_clk_cnt;
int sdiowakeup_irq;
u32 *sup_ice_clk_table;
unsigned char sup_ice_clk_cnt;
struct sdhci_msm_pm_qos_data pm_qos_data;
u32 ice_clk_max;
u32 ice_clk_min;
u32 ddr_config;
bool rclk_wa;
u32 *bus_clk_table;
unsigned char bus_clk_cnt;
};
struct sdhci_msm_bus_vote {
uint32_t client_handle;
uint32_t curr_vote;
int min_bw_vote;
int max_bw_vote;
bool is_max_bw_needed;
struct delayed_work vote_work;
struct device_attribute max_bus_bw;
};
struct sdhci_msm_ice_data {
struct qcom_ice_variant_ops *vops;
struct platform_device *pdev;
int state;
};
struct sdhci_msm_regs_restore {
bool is_supported;
bool is_valid;
u32 vendor_pwrctl_mask;
u32 vendor_pwrctl_ctl;
u32 vendor_caps_0;
u32 vendor_func;
u32 vendor_func2;
u32 vendor_func3;
u32 hc_2c_2e;
u32 hc_28_2a;
u32 hc_34_36;
u32 hc_38_3a;
u32 hc_3c_3e;
u32 hc_caps_1;
u32 testbus_config;
u32 dll_config;
u32 dll_config2;
u32 dll_config3;
u32 dll_usr_ctl;
};
struct sdhci_msm_debug_data {
struct mmc_host copy_mmc;
struct mmc_card copy_card;
struct sdhci_host copy_host;
};
struct sdhci_msm_host {
struct platform_device *pdev;
void __iomem *core_mem; /* MSM SDCC mapped address */
void __iomem *cryptoio; /* ICE HCI mapped address */
bool ice_hci_support;
int pwr_irq; /* power irq */
struct clk *clk; /* main SD/MMC bus clock */
struct clk *pclk; /* SDHC peripheral bus clock */
struct clk *bus_aggr_clk; /* AXI clock shared with UFS */
struct clk *bus_clk; /* SDHC bus voter clock */
struct clk *ff_clk; /* CDC calibration fixed feedback clock */
struct clk *sleep_clk; /* CDC calibration sleep clock */
struct clk *ice_clk; /* SDHC peripheral ICE clock */
atomic_t clks_on; /* Set if clocks are enabled */
struct sdhci_msm_pltfm_data *pdata;
struct mmc_host *mmc;
struct sdhci_msm_debug_data cached_data;
struct sdhci_pltfm_data sdhci_msm_pdata;
u32 curr_pwr_state;
u32 curr_io_level;
struct completion pwr_irq_completion;
struct sdhci_msm_bus_vote msm_bus_vote;
struct device_attribute polling;
u32 clk_rate; /* Keeps track of current clock rate that is set */
bool tuning_done;
bool calibration_done;
u8 saved_tuning_phase;
bool en_auto_cmd21;
struct device_attribute auto_cmd21_attr;
bool is_sdiowakeup_enabled;
bool sdio_pending_processing;
atomic_t controller_clock;
bool use_cdclp533;
bool use_updated_dll_reset;
bool use_14lpp_dll;
bool enhanced_strobe;
bool rclk_delay_fix;
u32 caps_0;
struct sdhci_msm_ice_data ice;
u32 ice_clk_rate;
struct sdhci_msm_pm_qos_group *pm_qos;
int pm_qos_prev_cpu;
struct device_attribute pm_qos_group_enable_attr;
struct device_attribute pm_qos_group_status_attr;
bool pm_qos_group_enable;
struct sdhci_msm_pm_qos_irq pm_qos_irq;
bool tuning_in_progress;
bool mci_removed;
const struct sdhci_msm_offset *offset;
bool core_3_0v_support;
bool pltfm_init_done;
struct sdhci_msm_regs_restore regs_restore;
bool use_7nm_dll;
int soc_min_rev;
struct workqueue_struct *pm_qos_wq;
bool use_cdr;
u32 transfer_mode;
};
extern char *saved_command_line;
void sdhci_msm_pm_qos_irq_init(struct sdhci_host *host);
void sdhci_msm_pm_qos_irq_vote(struct sdhci_host *host);
void sdhci_msm_pm_qos_irq_unvote(struct sdhci_host *host, bool async);
void sdhci_msm_pm_qos_cpu_init(struct sdhci_host *host,
struct sdhci_msm_pm_qos_latency *latency);
void sdhci_msm_pm_qos_cpu_vote(struct sdhci_host *host,
struct sdhci_msm_pm_qos_latency *latency, int cpu);
bool sdhci_msm_pm_qos_cpu_unvote(struct sdhci_host *host, int cpu, bool async);
#endif /* __SDHCI_MSM_H__ */


@@ -121,7 +121,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
struct resource *iomem;
void __iomem *ioaddr;
int irq, ret;
struct extcon_dev *extcon;
iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
ioaddr = devm_ioremap_resource(&pdev->dev, iomem);
@@ -157,14 +156,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
host->quirks2 = pdata->quirks2;
}
extcon = extcon_get_edev_by_phandle(&pdev->dev, 0);
if (IS_ERR(extcon) && PTR_ERR(extcon) != -ENODEV) {
ret = PTR_ERR(extcon);
goto err;
}
if (!IS_ERR(extcon))
host->mmc->extcon = extcon;
platform_set_drvdata(pdev, host);
return host;


@@ -23,7 +23,6 @@ struct sdhci_pltfm_data {
struct sdhci_pltfm_host {
struct clk *clk;
void *priv; /* to handle quirks across io-accessor calls */
/* migrate from sdhci_of_host */
unsigned int clock;


@@ -19,7 +19,7 @@
#include <linux/io.h>
#include <linux/leds.h>
#include <linux/interrupt.h>
#include <linux/ratelimit.h>
#include <linux/mmc/host.h>
/*
@@ -151,16 +151,12 @@
#define SDHCI_INT_ERROR_MASK 0xFFFF8000
#define SDHCI_INT_CMD_MASK (SDHCI_INT_RESPONSE | SDHCI_INT_TIMEOUT | \
SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX | \
SDHCI_INT_ACMD12ERR)
SDHCI_INT_CRC | SDHCI_INT_END_BIT | SDHCI_INT_INDEX)
#define SDHCI_INT_DATA_MASK (SDHCI_INT_DATA_END | SDHCI_INT_DMA_END | \
SDHCI_INT_DATA_AVAIL | SDHCI_INT_SPACE_AVAIL | \
SDHCI_INT_DATA_TIMEOUT | SDHCI_INT_DATA_CRC | \
SDHCI_INT_DATA_END_BIT | SDHCI_INT_ADMA_ERROR | \
SDHCI_INT_BLK_GAP)
#define SDHCI_INT_CMDQ_EN (0x1 << 14)
#define SDHCI_INT_ALL_MASK ((unsigned int)-1)
#define SDHCI_CQE_INT_ERR_MASK ( \
@@ -170,13 +166,7 @@
#define SDHCI_CQE_INT_MASK (SDHCI_CQE_INT_ERR_MASK | SDHCI_INT_CQE)
#define SDHCI_ACMD12_ERR 0x3C
#define SDHCI_AUTO_CMD12_NOT_EXEC 0x0001
#define SDHCI_AUTO_CMD_TIMEOUT_ERR 0x0002
#define SDHCI_AUTO_CMD_CRC_ERR 0x0004
#define SDHCI_AUTO_CMD_ENDBIT_ERR 0x0008
#define SDHCI_AUTO_CMD_INDEX_ERR 0x0010
#define SDHCI_AUTO_CMD12_NOT_ISSUED 0x0080
#define SDHCI_ACMD12_ERR 0x3C
#define SDHCI_HOST_CONTROL2 0x3E
#define SDHCI_CTRL_UHS_MASK 0x0007
@@ -194,7 +184,6 @@
#define SDHCI_CTRL_DRV_TYPE_D 0x0030
#define SDHCI_CTRL_EXEC_TUNING 0x0040
#define SDHCI_CTRL_TUNED_CLK 0x0080
#define SDHCI_CTRL_ASYNC_INT_ENABLE 0x4000
#define SDHCI_CTRL_PRESET_VAL_ENABLE 0x8000
#define SDHCI_CAPABILITIES 0x40
@@ -216,7 +205,6 @@
#define SDHCI_CAN_VDD_300 0x02000000
#define SDHCI_CAN_VDD_180 0x04000000
#define SDHCI_CAN_64BIT 0x10000000
#define SDHCI_CAN_ASYNC_INT 0x20000000
#define SDHCI_SUPPORT_SDR50 0x00000001
#define SDHCI_SUPPORT_SDR104 0x00000002
@@ -358,12 +346,6 @@ enum sdhci_cookie {
COOKIE_MAPPED, /* mapped by sdhci_prepare_data() */
};
enum sdhci_power_policy {
SDHCI_PERFORMANCE_MODE,
SDHCI_POWER_SAVE_MODE,
SDHCI_POWER_POLICY_NUM /* Always keep this one last */
};
struct sdhci_host {
/* Data set by hardware interface driver */
const char *hw_name; /* Hardware bus name */
@@ -469,83 +451,6 @@ struct sdhci_host {
*/
#define SDHCI_QUIRK2_DISABLE_HW_TIMEOUT (1<<17)
/*
* Read Transfer Active / Write Transfer Active may not be
* de-asserted after the end of a transaction. Issue a reset for the DAT line.
*/
#define SDHCI_QUIRK2_RDWR_TX_ACTIVE_EOT (1<<18)
/*
* Slow interrupt clearance at 400KHz may cause the
* host controller driver's interrupt handler to
* be called twice.
*/
#define SDHCI_QUIRK2_SLOW_INT_CLR (1<<19)
/*
* If the base clock is scalable, there should be no further
* clock division as the input clock itself will be scaled down to
* the required frequency.
*/
#define SDHCI_QUIRK2_ALWAYS_USE_BASE_CLOCK (1<<20)
/*
* Ignore data timeout error for R1B commands as there will be no
* data associated and the busy timeout value for these commands
* could be larger than the maximum timeout value that the controller
* can handle.
*/
#define SDHCI_QUIRK2_IGNORE_DATATOUT_FOR_R1BCMD (1<<21)
/*
* The preset value registers are not properly initialized by
* some hardware and hence preset value must not be enabled for
* such controllers.
*/
#define SDHCI_QUIRK2_BROKEN_PRESET_VALUE (1<<22)
/*
* Some controllers define the usage of 0xF in the data timeout counter
* register (0x2E), which is actually a reserved value as per the
* specification.
*/
#define SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT (1<<23)
/*
* This is applicable for controllers that advertise the timeout clock
* value in the capabilities register (bits 5-0) as just 50MHz whereas the
* base clock frequency is 200MHz. So, the controller internally
* multiplies the value in the timeout control register by 4 with the
* assumption that the driver always uses the fixed timeout clock value
* from the capabilities register to calculate the timeout. But when the
* driver uses SDHCI_QUIRK2_ALWAYS_USE_BASE_CLOCK, the base clock frequency
* is directly controlled by the driver and its rate varies up to a max of
* 200MHz. This new quirk will be used in such cases to avoid the
* controller's multiplication when the timeout is calculated based on
* the base clock.
#define SDHCI_QUIRK2_DIVIDE_TOUT_BY_4 (1 << 24)
/*
* Some SDHC controllers are unable to handle data-end bit error in
* 1-bit mode of SDIO.
*/
#define SDHCI_QUIRK2_IGN_DATA_END_BIT_ERROR (1<<25)
/* Use reset workaround in case sdhci reset timeouts */
#define SDHCI_QUIRK2_USE_RESET_WORKAROUND (1<<26)
/* Some controllers don't have any LED control */
#define SDHCI_QUIRK2_BROKEN_LED_CONTROL (1<<27)
/*
* Some controllers don't follow the tuning procedure as defined in the spec.
* The tuning data has to be compared by the SW driver to validate the correct
* phase.
*/
#define SDHCI_QUIRK2_NON_STANDARD_TUNING (1 << 28)
/*
* Some controllers may use PIO mode to work around HW issues in ADMA for
* eMMC tuning commands.
*/
#define SDHCI_QUIRK2_USE_PIO_FOR_EMMC_TUNING (1 << 29)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
char *bounce_buffer; /* For packing SDMA reads/writes */
@@ -558,7 +463,6 @@ struct sdhci_host {
struct mmc_host *mmc; /* MMC structure */
struct mmc_host_ops mmc_host_ops; /* MMC host ops */
u64 dma_mask; /* custom DMA mask */
u64 coherent_dma_mask;
#if IS_ENABLED(CONFIG_LEDS_CLASS)
struct led_classdev led; /* LED control */
@@ -582,7 +486,6 @@ struct sdhci_host {
#define SDHCI_SIGNALING_330 (1<<14) /* Host is capable of 3.3V signaling */
#define SDHCI_SIGNALING_180 (1<<15) /* Host is capable of 1.8V signaling */
#define SDHCI_SIGNALING_120 (1<<16) /* Host is capable of 1.2V signaling */
#define SDHCI_HOST_IRQ_STATUS (1<<17) /* host->irq status */
unsigned int version; /* SDHCI spec. version */
@@ -598,10 +501,8 @@ struct sdhci_host {
bool preset_enabled; /* Preset is enabled */
bool pending_reset; /* Cmd/data reset is pending */
bool irq_wake_enabled; /* IRQ wakeup is enabled */
bool cdr_support;
struct mmc_request *mrqs_done[SDHCI_MAX_MRQS]; /* Requests done */
struct mmc_request *mrq; /* Current request */
struct mmc_command *cmd; /* Current command */
struct mmc_command *data_cmd; /* Current data command */
struct mmc_data *data; /* Current data request */
@@ -664,20 +565,6 @@ struct sdhci_host {
u64 data_timeout;
ktime_t data_start_time;
enum sdhci_power_policy power_policy;
bool sdio_irq_async_status;
bool is_crypto_en;
bool crypto_reset_reqd;
u32 auto_cmd_err_sts;
struct ratelimit_state dbg_dump_rs;
int reset_wa_applied; /* reset workaround status */
ktime_t reset_wa_t; /* time when the reset workaround is applied */
int reset_wa_cnt; /* total number of times workaround is used */
unsigned long private[0] ____cacheline_aligned;
};
@@ -711,44 +598,11 @@ struct sdhci_ops {
unsigned int (*get_ro)(struct sdhci_host *host);
void (*reset)(struct sdhci_host *host, u8 mask);
int (*platform_execute_tuning)(struct sdhci_host *host, u32 opcode);
int (*crypto_engine_cfg)(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot);
int (*crypto_engine_cmdq_cfg)(struct sdhci_host *host,
struct mmc_request *mrq, u32 slot, u64 *ice_ctx);
int (*crypto_engine_cfg_end)(struct sdhci_host *host,
struct mmc_request *mrq);
int (*crypto_engine_reset)(struct sdhci_host *host);
void (*crypto_cfg_reset)(struct sdhci_host *host, unsigned int slot);
void (*set_uhs_signaling)(struct sdhci_host *host, unsigned int uhs);
void (*hw_reset)(struct sdhci_host *host);
void (*adma_workaround)(struct sdhci_host *host, u32 intmask);
unsigned int (*get_max_segments)(void);
#define REQ_BUS_OFF (1 << 0)
#define REQ_BUS_ON (1 << 1)
#define REQ_IO_LOW (1 << 2)
#define REQ_IO_HIGH (1 << 3)
void (*card_event)(struct sdhci_host *host);
int (*enhanced_strobe)(struct sdhci_host *host);
void (*platform_bus_voting)(struct sdhci_host *host, u32 enable);
void (*toggle_cdr)(struct sdhci_host *host, bool enable);
void (*check_power_status)(struct sdhci_host *host, u32 req_type);
int (*config_auto_tuning_cmd)(struct sdhci_host *host,
bool enable, u32 type);
int (*enable_controller_clock)(struct sdhci_host *host);
void (*clear_set_dumpregs)(struct sdhci_host *host, bool set);
void (*enhanced_strobe_mask)(struct sdhci_host *host, bool set);
void (*dump_vendor_regs)(struct sdhci_host *host);
void (*voltage_switch)(struct sdhci_host *host);
int (*select_drive_strength)(struct sdhci_host *host,
struct mmc_card *card,
unsigned int max_dtr, int host_drv,
int card_drv, int *drv_type);
int (*notify_load)(struct sdhci_host *host, enum mmc_load state);
void (*reset_workaround)(struct sdhci_host *host, u32 enable);
void (*init)(struct sdhci_host *host);
void (*pre_req)(struct sdhci_host *host, struct mmc_request *req);
void (*post_req)(struct sdhci_host *host, struct mmc_request *req);
unsigned int (*get_current_limit)(struct sdhci_host *host);
};
#ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
@@ -899,5 +753,4 @@ void sdhci_end_tuning(struct sdhci_host *host);
void sdhci_reset_tuning(struct sdhci_host *host);
void sdhci_send_tuning(struct sdhci_host *host, u32 opcode);
void sdhci_cfg_irq(struct sdhci_host *host, bool enable, bool sync);
#endif /* __SDHCI_HW_H */


@@ -11,10 +11,7 @@
#define LINUX_MMC_CARD_H
#include <linux/device.h>
#include <linux/mmc/core.h>
#include <linux/mmc/mmc.h>
#include <linux/mod_devicetable.h>
#include <linux/notifier.h>
struct mmc_cid {
unsigned int manfid;
@@ -63,7 +60,7 @@ struct mmc_ext_csd {
unsigned int part_time; /* Units: ms */
unsigned int sa_timeout; /* Units: 100ns */
unsigned int generic_cmd6_time; /* Units: 10ms */
unsigned int power_off_longtime; /* Units: ms */
unsigned int power_off_longtime; /* Units: ms */
u8 power_off_notification; /* state */
unsigned int hs_max_dtr;
unsigned int hs200_max_dtr;
@@ -91,8 +88,6 @@ struct mmc_ext_csd {
unsigned int data_tag_unit_size; /* DATA TAG UNIT size */
unsigned int boot_ro_lock; /* ro lock support */
bool boot_ro_lockable;
u8 raw_ext_csd_cmdq; /* 15 */
u8 raw_ext_csd_cache_ctrl; /* 33 */
bool ffu_capable; /* Firmware upgrade support */
bool cmdq_en; /* Command Queue enabled */
bool cmdq_support; /* Command Queue supported */
@@ -103,10 +98,7 @@ struct mmc_ext_csd {
u8 raw_partition_support; /* 160 */
u8 raw_rpmb_size_mult; /* 168 */
u8 raw_erased_mem_count; /* 181 */
u8 raw_ext_csd_bus_width; /* 183 */
u8 strobe_support; /* 184 */
#define MMC_STROBE_SUPPORT (1 << 0)
u8 raw_ext_csd_hs_timing; /* 185 */
u8 raw_ext_csd_structure; /* 194 */
u8 raw_card_type; /* 196 */
u8 raw_driver_strength; /* 197 */
@@ -127,18 +119,13 @@ struct mmc_ext_csd {
u8 raw_pwr_cl_200_360; /* 237 */
u8 raw_pwr_cl_ddr_52_195; /* 238 */
u8 raw_pwr_cl_ddr_52_360; /* 239 */
u8 cache_flush_policy; /* 240 */
#define MMC_BKOPS_URGENCY_MASK 0x3
u8 raw_pwr_cl_ddr_200_360; /* 253 */
u8 raw_bkops_status; /* 246 */
u8 raw_sectors[4]; /* 212 - 4 bytes */
u8 pre_eol_info; /* 267 */
u8 device_life_time_est_typ_a; /* 268 */
u8 device_life_time_est_typ_b; /* 269 */
u8 barrier_support; /* 486 */
u8 barrier_en;
u8 fw_version; /* 254 */
unsigned int feature_support;
#define MMC_DISCARD_FEATURE BIT(0) /* CMD38 feature */
};
@@ -210,8 +197,7 @@ struct sdio_cccr {
wide_bus:1,
high_power:1,
high_speed:1,
disable_cd:1,
async_intr_sup:1;
disable_cd:1;
};
struct sdio_cis {
@@ -222,7 +208,6 @@ struct sdio_cis {
};
struct mmc_host;
struct mmc_ios;
struct sdio_func;
struct sdio_func_tuple;
struct mmc_queue_req;
@@ -253,62 +238,6 @@ struct mmc_part {
#define MMC_BLK_DATA_AREA_RPMB (1<<3)
};
enum {
MMC_BKOPS_NO_OP,
MMC_BKOPS_NOT_CRITICAL,
MMC_BKOPS_PERF_IMPACT,
MMC_BKOPS_CRITICAL,
MMC_BKOPS_NUM_SEVERITY_LEVELS,
};
/**
* struct mmc_bkops_stats - BKOPS statistics
* @lock: spinlock used for synchronizing the debugfs and the runtime accesses
* to this structure. No need to call with the spin_lock_irq API
* @manual_start: number of times START_BKOPS was sent to the device
* @hpi: number of times HPI was sent to the device
* @auto_start: number of times AUTO_EN was set to 1
* @auto_stop: number of times AUTO_EN was set to 0
* @level: number of times the device reported the need for each level of
* bkops handling
* @enabled: control over whether statistics should be gathered
*
* This structure is used to collect statistics regarding the bkops
* configuration and use-patterns. It is collected during runtime and can be
* shown to the user via a debugfs entry.
*/
struct mmc_bkops_stats {
spinlock_t lock;
unsigned int manual_start;
unsigned int hpi;
unsigned int auto_start;
unsigned int auto_stop;
unsigned int level[MMC_BKOPS_NUM_SEVERITY_LEVELS];
bool enabled;
};
/**
* struct mmc_bkops_info - BKOPS data
* @stats: statistic information regarding bkops
* @needs_check: indication whether need to check with the device
* whether it requires handling of BKOPS (CMD8)
* @needs_manual: indication whether have to send START_BKOPS
* to the device
*/
struct mmc_bkops_info {
struct mmc_bkops_stats stats;
bool needs_check;
bool needs_bkops;
u32 retry_counter;
};
enum mmc_pon_type {
MMC_LONG_PON = 1,
MMC_SHRT_PON,
};
#define mmc_card_strobe(c) (((c)->ext_csd).strobe_support & MMC_STROBE_SUPPORT)
/*
* MMC device
*/
@@ -316,10 +245,6 @@ struct mmc_card {
struct mmc_host *host; /* the host this device belongs to */
struct device dev; /* the device */
u32 ocr; /* the current OCR setting */
unsigned long clk_scaling_lowest; /* lowest scaleable*/
/* frequency */
unsigned long clk_scaling_highest; /* highest scaleable */
/* frequency */
unsigned int rca; /* relative card address of device */
unsigned int type; /* card type */
#define MMC_TYPE_MMC 0 /* MMC card */
@@ -334,8 +259,6 @@ struct mmc_card {
/* for byte mode */
#define MMC_QUIRK_NONSTD_SDIO (1<<2) /* non-standard SDIO card attached */
/* (missing CIA registers) */
#define MMC_QUIRK_BROKEN_CLK_GATING (1<<3) /* clock gating the sdio bus */
/* will make card fail */
#define MMC_QUIRK_NONSTD_FUNC_IF (1<<4) /* SDIO card has nonstd function interfaces */
#define MMC_QUIRK_DISABLE_CD (1<<5) /* disconnect CD/DAT[3] resistor */
#define MMC_QUIRK_INAND_CMD38 (1<<6) /* iNAND devices have broken CMD38 */
@@ -347,14 +270,6 @@ struct mmc_card {
#define MMC_QUIRK_BROKEN_IRQ_POLLING (1<<11) /* Polling SDIO_CCCR_INTx could create a fake interrupt */
#define MMC_QUIRK_TRIM_BROKEN (1<<12) /* Skip trim */
#define MMC_QUIRK_BROKEN_HPI (1<<13) /* Disable broken HPI support */
/* byte mode */
#define MMC_QUIRK_INAND_DATA_TIMEOUT (1<<14) /* For incorrect data timeout */
#define MMC_QUIRK_CACHE_DISABLE (1 << 15) /* prevent cache enable */
#define MMC_QUIRK_QCA6574_SETTINGS (1 << 16) /* QCA6574 card settings*/
#define MMC_QUIRK_QCA9377_SETTINGS (1 << 17) /* QCA9377 card settings*/
/* Make sure CMDQ is empty before queuing DCMD */
#define MMC_QUIRK_CMDQ_EMPTY_BEFORE_DCMD (1 << 18)
bool reenable_cmdq; /* Re-enable Command Queue */
@@ -390,13 +305,10 @@ struct mmc_card {
struct dentry *debugfs_root;
struct mmc_part part[MMC_NUM_PHY_PARTITION]; /* physical partitions */
unsigned int nr_parts;
unsigned int part_curr;
unsigned int nr_parts;
struct notifier_block reboot_notify;
enum mmc_pon_type pon_type;
struct mmc_bkops_info bkops;
struct workqueue_struct *complete_wq; /* Private workqueue */
unsigned int bouncesz; /* Bounce buffer size */
};
static inline bool mmc_large_sector(struct mmc_card *card)
@@ -404,53 +316,10 @@ static inline bool mmc_large_sector(struct mmc_card *card)
return card->ext_csd.data_sector_size == 4096;
}
/* extended CSD mapping to mmc version */
enum mmc_version_ext_csd_rev {
MMC_V4_0,
MMC_V4_1,
MMC_V4_2,
MMC_V4_41 = 5,
MMC_V4_5,
MMC_V4_51 = MMC_V4_5,
MMC_V5_0,
MMC_V5_01 = MMC_V5_0,
MMC_V5_1
};
bool mmc_card_is_blockaddr(struct mmc_card *card);
#define mmc_card_mmc(c) ((c)->type == MMC_TYPE_MMC)
#define mmc_card_sd(c) ((c)->type == MMC_TYPE_SD)
#define mmc_card_sdio(c) ((c)->type == MMC_TYPE_SDIO)
static inline bool mmc_card_support_auto_bkops(const struct mmc_card *c)
{
return c->ext_csd.rev >= MMC_V5_1;
}
static inline bool mmc_card_configured_manual_bkops(const struct mmc_card *c)
{
return c->ext_csd.man_bkops_en;
}
static inline bool mmc_card_configured_auto_bkops(const struct mmc_card *c)
{
return c->ext_csd.auto_bkops_en;
}
static inline bool mmc_enable_qca6574_settings(const struct mmc_card *c)
{
return c->quirks & MMC_QUIRK_QCA6574_SETTINGS;
}
static inline bool mmc_enable_qca9377_settings(const struct mmc_card *c)
{
return c->quirks & MMC_QUIRK_QCA9377_SETTINGS;
}
#define mmc_dev_to_card(d) container_of(d, struct mmc_card, dev)
#define mmc_get_drvdata(c) dev_get_drvdata(&(c)->dev)
#define mmc_set_drvdata(c, d) dev_set_drvdata(&(c)->dev, d)
extern int mmc_send_pon(struct mmc_card *card);
#endif /* LINUX_MMC_CARD_H */


@@ -8,7 +8,6 @@
#ifndef LINUX_MMC_CORE_H
#define LINUX_MMC_CORE_H
#include <uapi/linux/mmc/core.h>
#include <linux/completion.h>
#include <linux/types.h>
@@ -35,6 +34,38 @@ struct mmc_command {
#define MMC_CMD23_ARG_TAG_REQ (1 << 29)
u32 resp[4];
unsigned int flags; /* expected response type */
#define MMC_RSP_PRESENT (1 << 0)
#define MMC_RSP_136 (1 << 1) /* 136 bit response */
#define MMC_RSP_CRC (1 << 2) /* expect valid crc */
#define MMC_RSP_BUSY (1 << 3) /* card may send busy */
#define MMC_RSP_OPCODE (1 << 4) /* response contains opcode */
#define MMC_CMD_MASK (3 << 5) /* non-SPI command type */
#define MMC_CMD_AC (0 << 5)
#define MMC_CMD_ADTC (1 << 5)
#define MMC_CMD_BC (2 << 5)
#define MMC_CMD_BCR (3 << 5)
#define MMC_RSP_SPI_S1 (1 << 7) /* one status byte */
#define MMC_RSP_SPI_S2 (1 << 8) /* second byte */
#define MMC_RSP_SPI_B4 (1 << 9) /* four data bytes */
#define MMC_RSP_SPI_BUSY (1 << 10) /* card may send busy */
/*
* These are the native response types, and correspond to valid bit
* patterns of the above flags. One additional valid pattern
* is all zeros, which means we don't expect a response.
*/
#define MMC_RSP_NONE (0)
#define MMC_RSP_R1 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
#define MMC_RSP_R1B \
(MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE|MMC_RSP_BUSY)
#define MMC_RSP_R2 (MMC_RSP_PRESENT|MMC_RSP_136|MMC_RSP_CRC)
#define MMC_RSP_R3 (MMC_RSP_PRESENT)
#define MMC_RSP_R4 (MMC_RSP_PRESENT)
#define MMC_RSP_R5 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
#define MMC_RSP_R6 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
#define MMC_RSP_R7 (MMC_RSP_PRESENT|MMC_RSP_CRC|MMC_RSP_OPCODE)
/* Can be used by core to poll after switch to MMC HS mode */
#define MMC_RSP_R1_NO_CRC (MMC_RSP_PRESENT|MMC_RSP_OPCODE)
@@ -82,8 +113,6 @@ struct mmc_command {
unsigned int busy_timeout; /* busy detect timeout in ms */
/* Set this flag only for blocking sanitize request */
bool sanitize_busy;
/* Set this flag only for blocking bkops request */
bool bkops_busy;
struct mmc_data *data; /* data segment associated with cmd */
struct mmc_request *mrq; /* associated request */
@@ -116,7 +145,6 @@ struct mmc_data {
int sg_count; /* mapped sg entries */
struct scatterlist *sg; /* I/O scatter list */
s32 host_cookie; /* host private data */
bool fault_injected; /* fault injected */
};
struct mmc_host;
@@ -145,16 +173,6 @@ struct mmc_request {
struct mmc_card;
extern void mmc_check_bkops(struct mmc_card *card);
extern void mmc_start_manual_bkops(struct mmc_card *card);
extern int mmc_set_auto_bkops(struct mmc_card *card, bool enable);
extern int mmc_suspend_clk_scaling(struct mmc_host *host);
extern void mmc_flush_detect_work(struct mmc_host *host);
extern int mmc_try_claim_host(struct mmc_host *host, unsigned int delay);
extern void __mmc_put_card(struct mmc_card *card);
extern void mmc_blk_init_bkops_statistics(struct mmc_card *card);
extern void mmc_deferred_scaling(struct mmc_host *host);
void mmc_wait_for_req(struct mmc_host *host, struct mmc_request *mrq);
int mmc_wait_for_cmd(struct mmc_host *host, struct mmc_command *cmd,
int retries);


@@ -12,25 +12,17 @@
#include <linux/sched.h>
#include <linux/device.h>
#include <linux/devfreq.h>
#include <linux/fault-inject.h>
#include <linux/blkdev.h>
#include <linux/extcon.h>
#include <linux/mmc/core.h>
#include <linux/mmc/card.h>
#include <linux/mmc/pm.h>
#include <linux/dma-direction.h>
#include <linux/mmc/ring_buffer.h>
#define MMC_AUTOSUSPEND_DELAY_MS 3000
struct mmc_ios {
unsigned int clock; /* clock rate */
unsigned int old_rate; /* saved clock rate */
unsigned long clk_ts; /* time stamp of last updated clock */
unsigned int clock; /* clock rate */
unsigned short vdd;
unsigned int power_delay_ms; /* waiting for stable power */
unsigned int power_delay_ms; /* waiting for stable power */
/* vdd stores the bit number of the selected voltage range from below. */
@@ -90,37 +82,7 @@ struct mmc_ios {
struct mmc_host;
/* states to represent load on the host */
enum mmc_load {
MMC_LOAD_HIGH,
MMC_LOAD_LOW,
};
enum {
MMC_ERR_CMD_TIMEOUT,
MMC_ERR_CMD_CRC,
MMC_ERR_DAT_TIMEOUT,
MMC_ERR_DAT_CRC,
MMC_ERR_AUTO_CMD,
MMC_ERR_ADMA,
MMC_ERR_TUNING,
MMC_ERR_CMDQ_RED,
MMC_ERR_CMDQ_GCE,
MMC_ERR_CMDQ_ICCE,
MMC_ERR_REQ_TIMEOUT,
MMC_ERR_CMDQ_REQ_TIMEOUT,
MMC_ERR_ICE_CFG,
MMC_ERR_MAX,
};
struct mmc_host_ops {
int (*init)(struct mmc_host *host);
/*
* 'enable' is called when the host is claimed and 'disable' is called
* when the host is released. 'enable' and 'disable' are deprecated.
*/
int (*enable)(struct mmc_host *host);
int (*disable)(struct mmc_host *host);
/*
* It is optional for the host to implement pre_req and post_req in
* order to support double buffering of requests (prepare one
@@ -184,7 +146,6 @@ struct mmc_host_ops {
/* Prepare HS400 target operating frequency depending host driver */
int (*prepare_hs400_tuning)(struct mmc_host *host, struct mmc_ios *ios);
int (*enhanced_strobe)(struct mmc_host *host);
/* Prepare for switching from HS400 to HS200 */
void (*hs400_downgrade)(struct mmc_host *host);
@@ -207,13 +168,6 @@ struct mmc_host_ops {
*/
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
unsigned long (*get_max_frequency)(struct mmc_host *host);
unsigned long (*get_min_frequency)(struct mmc_host *host);
int (*notify_load)(struct mmc_host *mmc, enum mmc_load);
void (*notify_halt)(struct mmc_host *mmc, bool halt);
void (*force_err_irq)(struct mmc_host *host, u64 errmask);
};
struct mmc_cqe_ops {
@@ -293,14 +247,12 @@ struct mmc_slot {
* @is_new_req wake up reason was new request
* @is_waiting_last_req mmc context waiting for single running request
* @wait wait queue
* @lock lock to protect data fields
*/
struct mmc_context_info {
bool is_done_rcv;
bool is_new_req;
bool is_waiting_last_req;
wait_queue_head_t wait;
spinlock_t lock;
};
struct regulator;
@@ -315,67 +267,9 @@ struct mmc_ctx {
struct task_struct *task;
};
enum dev_state {
DEV_SUSPENDING = 1,
DEV_SUSPENDED,
DEV_RESUMED,
};
/**
* struct mmc_devfeq_clk_scaling - main context for MMC clock scaling logic
*
* @lock: spinlock to protect statistics
* @devfreq: struct that represent mmc-host as a client for devfreq
* @devfreq_profile: MMC device profile, mostly polling interval and callbacks
* @ondemand_gov_data: struct supplied to ondemmand governor (thresholds)
* @state: load state, can be HIGH or LOW. used to notify mmc_host_ops callback
* @start_busy: timestamp taken when a data request is started
* @measure_interval_start: timestamp taken when a measure interval starts
* @devfreq_abort: flag to sync between different contexts relevant to devfreq
 * @skip_clk_scale_freq_update: flag that enables/disables frequency change
* @freq_table_sz: table size of frequencies supplied to devfreq
* @freq_table: frequencies table supplied to devfreq
* @curr_freq: current frequency
* @polling_delay_ms: polling interval for status collection used by devfreq
* @upthreshold: up-threshold supplied to ondemand governor
* @downthreshold: down-threshold supplied to ondemand governor
* @need_freq_change: flag indicating if a frequency change is required
* @is_busy_started: flag indicating if a request is handled by the HW
* @enable: flag indicating if the clock scaling logic is enabled for this host
 * @is_suspended: to keep devfreq requests queued while mmc is suspended
*/
struct mmc_devfeq_clk_scaling {
spinlock_t lock;
struct devfreq *devfreq;
struct devfreq_dev_profile devfreq_profile;
struct devfreq_simple_ondemand_data ondemand_gov_data;
enum mmc_load state;
ktime_t start_busy;
ktime_t measure_interval_start;
atomic_t devfreq_abort;
bool skip_clk_scale_freq_update;
int freq_table_sz;
int pltfm_freq_table_sz;
u32 *freq_table;
u32 *pltfm_freq_table;
unsigned long total_busy_time_us;
unsigned long target_freq;
unsigned long curr_freq;
unsigned long polling_delay_ms;
unsigned int upthreshold;
unsigned int downthreshold;
unsigned int lower_bus_speed_mode;
#define MMC_SCALING_LOWER_DDR52_MODE 1
bool need_freq_change;
bool is_busy_started;
bool enable;
bool is_suspended;
};
struct mmc_host {
struct device *parent;
struct device class_dev;
struct mmc_devfeq_clk_scaling clk_scaling;
int index;
const struct mmc_host_ops *ops;
struct mmc_pwrseq *pwrseq;
@@ -453,17 +347,10 @@ struct mmc_host {
#define MMC_CAP2_FULL_PWR_CYCLE (1 << 2) /* Can do full power cycle */
#define MMC_CAP2_HS200_1_8V_SDR (1 << 5) /* can support */
#define MMC_CAP2_HS200_1_2V_SDR (1 << 6) /* can support */
#define MMC_CAP2_SLEEP_AWAKE (1 << 7) /* Use Sleep/Awake (CMD5) */
/* use max discard ignoring max_busy_timeout parameter */
#define MMC_CAP2_MAX_DISCARD_SIZE (1 << 8)
#define MMC_CAP2_HS200 (MMC_CAP2_HS200_1_8V_SDR | \
MMC_CAP2_HS200_1_2V_SDR)
#define MMC_CAP2_CD_ACTIVE_HIGH (1 << 10) /* Card-detect signal active high */
#define MMC_CAP2_RO_ACTIVE_HIGH (1 << 11) /* Write-protect signal active high */
#define MMC_CAP2_PACKED_RD (1 << 12) /* Allow packed read */
#define MMC_CAP2_PACKED_WR (1 << 13) /* Allow packed write */
#define MMC_CAP2_PACKED_CMD (MMC_CAP2_PACKED_RD | \
MMC_CAP2_PACKED_WR)
#define MMC_CAP2_NO_PRESCAN_POWERUP (1 << 14) /* Don't power up before scan */
#define MMC_CAP2_HS400_1_8V (1 << 15) /* Can support HS400 1.8V */
#define MMC_CAP2_HS400_1_2V (1 << 16) /* Can support HS400 1.2V */
@@ -480,31 +367,11 @@ struct mmc_host {
#define MMC_CAP2_CQE (1 << 23) /* Has eMMC command queue engine */
#define MMC_CAP2_CQE_DCMD (1 << 24) /* CQE can issue a direct command */
#define MMC_CAP2_AVOID_3_3V (1 << 25) /* Host must negotiate down from 3.3V */
#define MMC_CAP2_PACKED_WR_CONTROL (1 << 26) /* Allow write packed control */
#define MMC_CAP2_CLK_SCALE (1 << 27) /* Allow dynamic clk scaling */
#define MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE (1 << 28) /* Allow Async SDIO irq */
/* while card in 4-bit mode */
#define MMC_CAP2_NONHOTPLUG (1 << 29) /* Don't support hotplug */
/* Some hosts need additional tuning */
#define MMC_CAP2_HS400_POST_TUNING (1 << 30)
#define MMC_CAP2_SANITIZE (1 << 31) /* Support Sanitize */
int fixed_drv_type; /* fixed driver type for non-removable media */
mmc_pm_flag_t pm_caps; /* supported pm features */
#ifdef CONFIG_MMC_CLKGATE
int clk_requests; /* internal reference counter */
unsigned int clk_delay; /* number MCI clk hold cycles */
bool clk_gated; /* clock gated */
struct workqueue_struct *clk_gate_wq; /* clock gate work queue */
struct delayed_work clk_gate_work; /* delayed clock gate */
unsigned int clk_old; /* old clock value cache */
spinlock_t clk_lock; /* lock for clk fields */
struct mutex clk_gate_mutex; /* mutex for clock gating */
struct device_attribute clkgate_delay_attr;
unsigned long clkgate_delay;
#endif
/* host specific block data */
unsigned int max_seg_size; /* see blk_queue_max_segment_size */
unsigned short max_segs; /* see blk_queue_max_segments */
@@ -518,7 +385,6 @@ struct mmc_host {
spinlock_t lock; /* lock for claim and bus ops */
struct mmc_ios ios; /* current io bus settings */
struct mmc_ios cached_ios;
/* group bitfields together to minimize padding */
unsigned int use_spi_crc:1;
@@ -554,11 +420,6 @@ struct mmc_host {
const struct mmc_bus_ops *bus_ops; /* current bus driver */
unsigned int bus_refs; /* reference counter */
unsigned int bus_resume_flags;
#define MMC_BUSRESUME_MANUAL_RESUME (1 << 0)
#define MMC_BUSRESUME_NEEDS_RESUME (1 << 1)
bool ignore_bus_resume_flags;
unsigned int sdio_irqs;
struct task_struct *sdio_irq_thread;
struct delayed_work sdio_irq_work;
@@ -576,9 +437,6 @@ struct mmc_host {
struct dentry *debugfs_root;
bool err_occurred;
u32 err_stats[MMC_ERR_MAX];
/* Ongoing data transfer that allows commands during transfer */
struct mmc_request *ongoing_mrq;
@@ -600,36 +458,12 @@ struct mmc_host {
bool cqe_enabled;
bool cqe_on;
/*
* Set to 1 to just stop the SDCLK to the card without
* actually disabling the clock from its source.
*/
bool card_clock_off;
struct extcon_dev *extcon;
struct notifier_block card_detect_nb;
#ifdef CONFIG_MMC_PERF_PROFILING
struct {
unsigned long rbytes_drv; /* Rd bytes MMC Host */
unsigned long wbytes_drv; /* Wr bytes MMC Host */
ktime_t rtime_drv; /* Rd time MMC Host */
ktime_t wtime_drv; /* Wr time MMC Host */
ktime_t start;
} perf;
bool perf_enable;
#endif
struct mmc_trace_buffer trace_buf;
enum dev_state dev_status;
bool inlinecrypt_support; /* Inline encryption support */
bool crash_on_err; /* crash the system on error */
unsigned long private[0] ____cacheline_aligned;
};
struct device_node;
struct mmc_host *mmc_alloc_host(int extra, struct device *);
extern bool mmc_host_may_gate_card(struct mmc_card *card);
int mmc_add_host(struct mmc_host *);
void mmc_remove_host(struct mmc_host *);
void mmc_free_host(struct mmc_host *);
@@ -646,22 +480,6 @@ static inline void *mmc_priv(struct mmc_host *host)
#define mmc_dev(x) ((x)->parent)
#define mmc_classdev(x) (&(x)->class_dev)
#define mmc_hostname(x) (dev_name(&(x)->class_dev))
#define mmc_bus_needs_resume(host) ((host)->bus_resume_flags & \
MMC_BUSRESUME_NEEDS_RESUME)
#define mmc_bus_manual_resume(host) ((host)->bus_resume_flags & \
MMC_BUSRESUME_MANUAL_RESUME)
static inline void mmc_set_bus_resume_policy(struct mmc_host *host, int manual)
{
if (manual)
host->bus_resume_flags |= MMC_BUSRESUME_MANUAL_RESUME;
else
host->bus_resume_flags &= ~MMC_BUSRESUME_MANUAL_RESUME;
}
extern int mmc_resume_bus(struct mmc_host *host);
void mmc_detect_change(struct mmc_host *, unsigned long delay);
void mmc_request_done(struct mmc_host *, struct mmc_request *);
@@ -724,42 +542,7 @@ static inline int mmc_card_wake_sdio_irq(struct mmc_host *host)
return host->pm_flags & MMC_PM_WAKE_SDIO_IRQ;
}
static inline bool mmc_card_and_host_support_async_int(struct mmc_host *host)
{
return ((host->caps2 & MMC_CAP2_ASYNC_SDIO_IRQ_4BIT_MODE) &&
(host->card->cccr.async_intr_sup));
}
static inline void mmc_host_clear_sdr104(struct mmc_host *host)
{
host->caps &= ~MMC_CAP_UHS_SDR104;
}
static inline void mmc_host_set_sdr104(struct mmc_host *host)
{
host->caps |= MMC_CAP_UHS_SDR104;
}
#ifdef CONFIG_MMC_CLKGATE
void mmc_host_clk_hold(struct mmc_host *host);
void mmc_host_clk_release(struct mmc_host *host);
unsigned int mmc_host_clk_rate(struct mmc_host *host);
#else
static inline void mmc_host_clk_hold(struct mmc_host *host)
{
}
static inline void mmc_host_clk_release(struct mmc_host *host)
{
}
static inline unsigned int mmc_host_clk_rate(struct mmc_host *host)
{
return host->ios.clock;
}
#endif
/* TODO: Move to private header */
static inline int mmc_card_hs(struct mmc_card *card)
{
return card->host->ios.timing == MMC_TIMING_SD_HS ||
@@ -773,8 +556,6 @@ static inline int mmc_card_uhs(struct mmc_card *card)
card->host->ios.timing <= MMC_TIMING_UHS_DDR50;
}
void mmc_retune_enable(struct mmc_host *host);
void mmc_retune_disable(struct mmc_host *host);
void mmc_retune_timer_stop(struct mmc_host *host);
static inline void mmc_retune_needed(struct mmc_host *host)


@@ -25,7 +25,66 @@
#define LINUX_MMC_MMC_H
#include <linux/types.h>
#include <uapi/linux/mmc/mmc.h>
/* Standard MMC commands (4.1) type argument response */
/* class 1 */
#define MMC_GO_IDLE_STATE 0 /* bc */
#define MMC_SEND_OP_COND 1 /* bcr [31:0] OCR R3 */
#define MMC_ALL_SEND_CID 2 /* bcr R2 */
#define MMC_SET_RELATIVE_ADDR 3 /* ac [31:16] RCA R1 */
#define MMC_SET_DSR 4 /* bc [31:16] RCA */
#define MMC_SLEEP_AWAKE 5 /* ac [31:16] RCA 15:flg R1b */
#define MMC_SWITCH 6 /* ac [31:0] See below R1b */
#define MMC_SELECT_CARD 7 /* ac [31:16] RCA R1 */
#define MMC_SEND_EXT_CSD 8 /* adtc R1 */
#define MMC_SEND_CSD 9 /* ac [31:16] RCA R2 */
#define MMC_SEND_CID 10 /* ac [31:16] RCA R2 */
#define MMC_READ_DAT_UNTIL_STOP 11 /* adtc [31:0] dadr R1 */
#define MMC_STOP_TRANSMISSION 12 /* ac R1b */
#define MMC_SEND_STATUS 13 /* ac [31:16] RCA R1 */
#define MMC_BUS_TEST_R 14 /* adtc R1 */
#define MMC_GO_INACTIVE_STATE 15 /* ac [31:16] RCA */
#define MMC_BUS_TEST_W 19 /* adtc R1 */
#define MMC_SPI_READ_OCR 58 /* spi spi_R3 */
#define MMC_SPI_CRC_ON_OFF 59 /* spi [0:0] flag spi_R1 */
/* class 2 */
#define MMC_SET_BLOCKLEN 16 /* ac [31:0] block len R1 */
#define MMC_READ_SINGLE_BLOCK 17 /* adtc [31:0] data addr R1 */
#define MMC_READ_MULTIPLE_BLOCK 18 /* adtc [31:0] data addr R1 */
#define MMC_SEND_TUNING_BLOCK 19 /* adtc R1 */
#define MMC_SEND_TUNING_BLOCK_HS200 21 /* adtc R1 */
/* class 3 */
#define MMC_WRITE_DAT_UNTIL_STOP 20 /* adtc [31:0] data addr R1 */
/* class 4 */
#define MMC_SET_BLOCK_COUNT 23 /* adtc [31:0] data addr R1 */
#define MMC_WRITE_BLOCK 24 /* adtc [31:0] data addr R1 */
#define MMC_WRITE_MULTIPLE_BLOCK 25 /* adtc R1 */
#define MMC_PROGRAM_CID 26 /* adtc R1 */
#define MMC_PROGRAM_CSD 27 /* adtc R1 */
/* class 6 */
#define MMC_SET_WRITE_PROT 28 /* ac [31:0] data addr R1b */
#define MMC_CLR_WRITE_PROT 29 /* ac [31:0] data addr R1b */
#define MMC_SEND_WRITE_PROT 30 /* adtc [31:0] wpdata addr R1 */
/* class 5 */
#define MMC_ERASE_GROUP_START 35 /* ac [31:0] data addr R1 */
#define MMC_ERASE_GROUP_END 36 /* ac [31:0] data addr R1 */
#define MMC_ERASE 38 /* ac R1b */
/* class 9 */
#define MMC_FAST_IO 39 /* ac <Complex> R4 */
#define MMC_GO_IRQ_STATE 40 /* bcr R5 */
/* class 7 */
#define MMC_LOCK_UNLOCK 42 /* adtc R1b */
/* class 8 */
#define MMC_APP_CMD 55 /* ac [31:16] RCA R1 */
#define MMC_GEN_CMD 56 /* adtc [0] RD/WR R1 */
/* class 11 */
#define MMC_QUE_TASK_PARAMS 44 /* ac [20:16] task id R1 */
@@ -129,7 +188,6 @@ static inline bool mmc_op_multi(u32 opcode)
* OCR bits are mostly in host.h
*/
#define MMC_CARD_BUSY 0x80000000 /* Card Power up status bit */
#define MMC_CARD_SECTOR_ADDR 0x40000000 /* Card supports sectors */
/*
* Card Command Classes (CCC)
@@ -233,7 +291,6 @@ static inline bool mmc_op_multi(u32 opcode)
#define EXT_CSD_PWR_CL_200_360 237 /* RO */
#define EXT_CSD_PWR_CL_DDR_52_195 238 /* RO */
#define EXT_CSD_PWR_CL_DDR_52_360 239 /* RO */
#define EXT_CSD_CACHE_FLUSH_POLICY 240 /* RO */
#define EXT_CSD_BKOPS_STATUS 246 /* RO */
#define EXT_CSD_POWER_OFF_LONG_TIME 247 /* RO */
#define EXT_CSD_GENERIC_CMD6_TIME 248 /* RO */
@@ -257,8 +314,7 @@ static inline bool mmc_op_multi(u32 opcode)
* EXT_CSD field definitions
*/
#define EXT_CSD_WR_REL_PARAM_EN (1<<2)
#define EXT_CSD_WR_REL_PARAM_EN_RPMB_REL_WR (1<<4)
#define EXT_CSD_BOOT_WP_B_PWR_WP_DIS (0x40)
#define EXT_CSD_BOOT_WP_B_PERM_WP_DIS (0x10)
@@ -331,9 +387,6 @@ static inline bool mmc_op_multi(u32 opcode)
#define EXT_CSD_PACKED_EVENT_EN BIT(3)
#define EXT_CSD_BKOPS_MANUAL_EN BIT(0)
#define EXT_CSD_BKOPS_AUTO_EN BIT(1)
/*
* EXCEPTION_EVENT_STATUS field
*/


@@ -1,46 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Copyright (c) 2017-2019, The Linux Foundation. All rights reserved.
*/
#ifndef __MMC_RING_BUFFER__
#define __MMC_RING_BUFFER__
#include <linux/mmc/card.h>
#include <linux/smp.h>
#include "core.h"
#define MMC_TRACE_RBUF_SZ_ORDER 2 /* 2^2 pages */
#define MMC_TRACE_RBUF_SZ (PAGE_SIZE * (1 << MMC_TRACE_RBUF_SZ_ORDER))
#define MMC_TRACE_EVENT_SZ 256
#define MMC_TRACE_RBUF_NUM_EVENTS (MMC_TRACE_RBUF_SZ / MMC_TRACE_EVENT_SZ)
struct mmc_host;
struct mmc_trace_buffer {
int wr_idx;
bool stop_tracing;
spinlock_t trace_lock;
char *data;
};
#ifdef CONFIG_MMC_RING_BUFFER
void mmc_stop_tracing(struct mmc_host *mmc);
void mmc_trace_write(struct mmc_host *mmc, const char *fmt, ...);
void mmc_trace_init(struct mmc_host *mmc);
void mmc_trace_free(struct mmc_host *mmc);
void mmc_dump_trace_buffer(struct mmc_host *mmc, struct seq_file *s);
#else
static inline void mmc_stop_tracing(struct mmc_host *mmc) {}
static inline void mmc_trace_write(struct mmc_host *mmc,
const char *fmt, ...) {}
static inline void mmc_trace_init(struct mmc_host *mmc) {}
static inline void mmc_trace_free(struct mmc_host *mmc) {}
static inline void mmc_dump_trace_buffer(struct mmc_host *mmc,
struct seq_file *s) {}
#endif
#define MMC_TRACE(mmc, fmt, ...) \
mmc_trace_write(mmc, fmt, ##__VA_ARGS__)
#endif /* __MMC_RING_BUFFER__ */


@@ -102,7 +102,6 @@
#define SDIO_BUS_WIDTH_1BIT 0x00
#define SDIO_BUS_WIDTH_RESERVED 0x01
#define SDIO_BUS_WIDTH_4BIT 0x02
#define SDIO_BUS_WIDTH_8BIT 0x03
#define SDIO_BUS_ECSI 0x20 /* Enable continuous SPI interrupt */
#define SDIO_BUS_SCSI 0x40 /* Support continuous SPI interrupt */
@@ -164,10 +163,6 @@
#define SDIO_DTSx_SET_TYPE_A (1 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_DTSx_SET_TYPE_C (2 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_DTSx_SET_TYPE_D (3 << SDIO_DRIVE_DTSx_SHIFT)
#define SDIO_CCCR_INTERRUPT_EXTENSION 0x16
#define SDIO_SUPPORT_ASYNC_INTR (1<<0)
#define SDIO_ENABLE_ASYNC_INTR (1<<1)
/*
* Function Basic Registers (FBR)
*/


@@ -35,7 +35,5 @@ int mmc_gpio_set_cd_wake(struct mmc_host *host, bool on);
void mmc_gpiod_request_cd_irq(struct mmc_host *host);
bool mmc_can_gpio_cd(struct mmc_host *host);
bool mmc_can_gpio_ro(struct mmc_host *host);
void mmc_register_extcon(struct mmc_host *host);
void mmc_unregister_extcon(struct mmc_host *host);
#endif


@@ -187,152 +187,7 @@ TRACE_EVENT(mmc_request_done,
__entry->hold_retune, __entry->retune_period)
);
TRACE_EVENT(mmc_cmd_rw_start,
TP_PROTO(unsigned int cmd, unsigned int arg, unsigned int flags),
TP_ARGS(cmd, arg, flags),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, arg)
__field(unsigned int, flags)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->arg = arg;
__entry->flags = flags;
),
TP_printk("cmd=%u,arg=0x%08x,flags=0x%08x",
__entry->cmd, __entry->arg, __entry->flags)
);
TRACE_EVENT(mmc_cmd_rw_end,
TP_PROTO(unsigned int cmd, unsigned int status, unsigned int resp),
TP_ARGS(cmd, status, resp),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, status)
__field(unsigned int, resp)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->status = status;
__entry->resp = resp;
),
TP_printk("cmd=%u,int_status=0x%08x,response=0x%08x",
__entry->cmd, __entry->status, __entry->resp)
);
TRACE_EVENT(mmc_data_rw_end,
TP_PROTO(unsigned int cmd, unsigned int status),
TP_ARGS(cmd, status),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, status)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->status = status;
),
TP_printk("cmd=%u,int_status=0x%08x",
__entry->cmd, __entry->status)
);
DECLARE_EVENT_CLASS(mmc_adma_class,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len),
TP_STRUCT__entry(
__field(unsigned int, cmd)
__field(unsigned int, len)
),
TP_fast_assign(
__entry->cmd = cmd;
__entry->len = len;
),
TP_printk("cmd=%u,sg_len=0x%08x", __entry->cmd, __entry->len)
);
DEFINE_EVENT(mmc_adma_class, mmc_adma_table_pre,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len));
DEFINE_EVENT(mmc_adma_class, mmc_adma_table_post,
TP_PROTO(unsigned int cmd, unsigned int len),
TP_ARGS(cmd, len));
TRACE_EVENT(mmc_clk,
TP_PROTO(char *print_info),
TP_ARGS(print_info),
TP_STRUCT__entry(
__string(print_info, print_info)
),
TP_fast_assign(
__assign_str(print_info, print_info);
),
TP_printk("%s",
__get_str(print_info)
)
);
DECLARE_EVENT_CLASS(mmc_pm_template,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs),
TP_STRUCT__entry(
__field(s64, usecs)
__field(int, err)
__string(dev_name, dev_name)
),
TP_fast_assign(
__entry->usecs = usecs;
__entry->err = err;
__assign_str(dev_name, dev_name);
),
TP_printk(
"took %lld usecs, %s err %d",
__entry->usecs,
__get_str(dev_name),
__entry->err
)
);
DEFINE_EVENT(mmc_pm_template, mmc_runtime_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_runtime_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, mmc_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_runtime_suspend,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
DEFINE_EVENT(mmc_pm_template, sdhci_msm_runtime_resume,
TP_PROTO(const char *dev_name, int err, s64 usecs),
TP_ARGS(dev_name, err, usecs));
#endif /* if !defined(_TRACE_MMC_H) || defined(TRACE_HEADER_MULTI_READ) */
#endif /* _TRACE_MMC_H */
/* This part must be outside protection */
#include <trace/define_trace.h>