[MTK Sensor 2.0] SCP and AP Data Flow Analysis


Related article: "[sensor2.0][SCP] A Brief Analysis of Data and Control Information Transfer"

IPI Communication APIs

AP side
/*
 * API that lets apps register an IPI handler to receive IPIs
 * @param id:      IPI ID
 * @param handler: IPI handler
 * @param name:    IPI name
 */
enum scp_ipi_status scp_ipi_registration(enum ipi_id id,
	void (*ipi_handler)(int id, void *data, unsigned int len),
	const char *name);

/*
 * API for apps to send an IPI to the SCP
 * @param id:     IPI ID
 * @param buf:    pointer to the data
 * @param len:    data length
 * @param wait:   if true, wait (atomically) until the data has been received by the host
 * @param scp_id: target SCP core
 */
enum scp_ipi_status scp_ipi_send(enum ipi_id id, void *buf,
	unsigned int  len, unsigned int wait, enum scp_core_id scp_id);
SCP side
ipi_status scp_ipi_registration(enum ipi_id id, ipi_handler_t handler, const char *name);
ipi_status scp_ipi_send(enum ipi_id id, void *buf, uint32_t len, uint32_t wait, enum ipi_dir dir);
Example

Suppose the AP side registers an IPI with ID IPI_CHRE and name chre_ap_txrx, and the SCP side registers an IPI with the same ID IPI_CHRE and name chre_scp_txrx. When chre_ap_txrx sends data via scp_ipi_send(), the handler registered as chre_scp_txrx receives and processes it; conversely, when chre_scp_txrx sends data via scp_ipi_send(), the ipi_handler registered as chre_ap_txrx receives and processes it.

AP-Side Driver Analysis

Source path: kernel-4.14\drivers\misc\mediatek\sensor

The AP side is responsible for the interface to the upper layers and for the interaction with the SCP. The hf_manager driver handles the upper-layer interface, while the mtk_nanohub driver handles the SCP interaction. Analyzing these drivers largely comes down to a few key structures, which carry the data and operations used throughout the communication path.

hf_manager_fops

The hf_manager driver creates the character device hf_manager, through which user space communicates with the kernel. Its file operations are as follows:

static const struct file_operations hf_manager_fops = {
	.owner          = THIS_MODULE,
	.open           = hf_manager_open,
	.release        = hf_manager_release,
	.read           = hf_manager_read,
	.write          = hf_manager_write,
	.poll           = hf_manager_poll,
	.unlocked_ioctl = hf_manager_ioctl,
	.compat_ioctl   = hf_manager_ioctl,
};
mtk_nanohub_dev

This structure is declared and created in the mtk_nanohub driver:

static struct mtk_nanohub_device *mtk_nanohub_dev;
struct mtk_nanohub_device {
	struct hf_device hf_dev;
......
	struct sensor_fifo *scp_sensor_fifo;
	struct curr_wp_queue wp_queue;
......
};

The member mtk_nanohub_dev->wp_queue stores data read from the shared DRAM; wp_queue is initialized when the mtk_nanohub driver is loaded.

mtk_nanohub_dev->hf_dev

The member mtk_nanohub_dev->hf_dev is filled in by mtk_nanohub_power_up_work(), the work function of a thread created by the mtk_nanohub driver. hf_dev contains the operations that issue commands to the SCP (such as enable and batch); hf_dev->support_list holds the information for all sensors and is populated by mtk_nanohub_get_sensor_info().

//mtk_nanohub_power_up_work() -> mtk_nanohub_power_up_loop() -> mtk_nanohub_create_manager()
static int mtk_nanohub_create_manager(void)
{
	int err = 0;
	struct hf_device *hf_dev = &mtk_nanohub_dev->hf_dev;
	struct mtk_nanohub_device *device = mtk_nanohub_dev;

	if (likely(atomic_xchg(&device->create_manager_first_boot, 1)))
		return 0;

	memset(hf_dev, 0, sizeof(*hf_dev));

	mtk_nanohub_get_sensor_info();

	hf_dev->dev_name = "mtk_nanohub";
	hf_dev->device_poll = HF_DEVICE_IO_INTERRUPT;
	hf_dev->device_bus = HF_DEVICE_IO_ASYNC;
	hf_dev->support_list = support_sensors;
	hf_dev->support_size = support_size;
	hf_dev->enable = mtk_nanohub_enable;
	hf_dev->batch = mtk_nanohub_batch;
	hf_dev->flush = mtk_nanohub_flush;
	hf_dev->calibration = mtk_nanohub_calibration;
	hf_dev->config_cali = mtk_nanohub_config;
	hf_dev->selftest = mtk_nanohub_selftest;
	hf_dev->rawdata = mtk_nanohub_rawdata;
	hf_dev->custom_cmd = mtk_nanohub_custom_cmd;

	err = hf_manager_create(hf_dev);
......
}

mtk_nanohub_dev->hf_dev is passed to hf_manager_create(), which allocates a struct hf_manager *manager, uses it to tie mtk_nanohub_dev->hf_dev to the hfcore structure, stores it back into hf_dev->manager, and fills in the rest of the manager structure.

int hf_manager_create(struct hf_device *device)
{
	uint8_t sensor_type = 0;
	int i = 0, err = 0;
	uint32_t gain = 0;
	struct hf_manager *manager = NULL;
...
	manager = kzalloc(sizeof(*manager), GFP_KERNEL);
	if (!manager)
		return -ENOMEM;

	manager->hf_dev = device;
	manager->core = &hfcore;
	device->manager = manager;
...
	manager->report = hf_manager_io_report;
	manager->complete = hf_manager_io_complete;
...
	INIT_LIST_HEAD(&manager->list);
	mutex_lock(&manager->core->manager_lock);
	list_add(&manager->list, &manager->core->manager_list);
	mutex_unlock(&manager->core->manager_lock);
...
}

Finally, the manager structure is added to the hfcore->manager_list list.

hfcore

The hfcore structure is defined in the hf_manager driver and is initialized by init_hf_core() when the driver initializes.

static struct hf_core hfcore;
static void init_hf_core(struct hf_core *core)
{
	int i = 0;

	mutex_init(&core->manager_lock);
	INIT_LIST_HEAD(&core->manager_list);
	for (i = 0; i < SENSOR_TYPE_SENSOR_MAX; ++i) {
		core->state[i].delay = S64_MAX;
		core->state[i].latency = S64_MAX;
		atomic64_set(&core->state[i].start_time, S64_MAX);
	}

	spin_lock_init(&core->client_lock);
	INIT_LIST_HEAD(&core->client_list);

	kthread_init_worker(&core->kworker);
}

As the code shows, the hfcore members client_list and manager_list serve as list heads. The core role of hfcore is to associate the struct hf_client created when the upper layer opens the hf_manager node with the struct hf_manager created by the mtk_nanohub driver after SCP initialization completes. Opening the hf_manager node calls the following function:

struct hf_client *hf_client_create(void)
{
	unsigned long flags;
	struct hf_client *client = NULL;
	struct hf_client_fifo *hf_fifo = NULL;

	client = kzalloc(sizeof(*client), GFP_KERNEL);
	if (!client)
		goto err_out;

	/* record process id and thread id for debug */
	strlcpy(client->proc_comm, current->comm, sizeof(client->proc_comm));
	client->leader_pid = current->group_leader->pid;
	client->pid = current->pid;
	client->core = &hfcore;

#ifdef HF_MANAGER_DEBUG
	pr_notice("Client create\n");
#endif

	INIT_LIST_HEAD(&client->list);

	hf_fifo = &client->hf_fifo;
	hf_fifo->head = 0;
	hf_fifo->tail = 0;
	hf_fifo->bufsize = roundup_pow_of_two(HF_CLIENT_FIFO_SIZE);
	hf_fifo->buffull = false;
	spin_lock_init(&hf_fifo->buffer_lock);
	init_waitqueue_head(&hf_fifo->wait);
	hf_fifo->buffer =
		kcalloc(hf_fifo->bufsize, sizeof(*hf_fifo->buffer),
			GFP_KERNEL);
	if (!hf_fifo->buffer)
		goto err_free;

	spin_lock_init(&client->request_lock);

	spin_lock_irqsave(&client->core->client_lock, flags);
	list_add(&client->list, &client->core->client_list); /* add to the client list */
	spin_unlock_irqrestore(&client->core->client_lock, flags);

	return client;
...
}

As the code shows, a pointer to hfcore is saved into the newly created struct hf_client *client, and the client is added to the hfcore->client_list list.

The g_nanohub_data_p pointer

This pointer is defined in the 2.0\mtk_nanohub\nanohub\main.c file:

static struct nanohub_data *g_nanohub_data_p;

To see how g_nanohub_data_p gets assigned, start from the nanohub driver (also in 2.0\mtk_nanohub\nanohub\main.c): during initialization it calls nanohub_ipi_init(), which registers the nanohub_ipi platform device and driver.

static int __init nanohub_init(void)
{
	int ret = 0;
...
#ifdef CONFIG_NANOHUB_MTK_IPI
		ret = nanohub_ipi_init();
#endif
	return ret;
}

int nanohub_ipi_init(void)
{
...
	ret = platform_device_register(&nanohub_ipi_pdev);
...
	ret = platform_driver_register(&nanohub_ipi_pdrv);
...
}

Once the driver and device match, nanohub_ipi_probe() runs. Inside nanohub_probe(), g_nanohub_data_p is pointed at nano_dev->drv_data, i.e. g_nanohub_data_p = &ipi_data->data.

int nanohub_ipi_probe(struct platform_device *pdev)
{
	struct nanohub_ipi_data *ipi_data;
	struct nanohub_device *nano_dev;
	enum scp_ipi_status status;

	nano_dev = kzalloc(sizeof(*nano_dev), GFP_KERNEL);
	if (!nano_dev)
		return -ENOMEM;
	ipi_data = kzalloc(sizeof(*ipi_data), GFP_KERNEL);
	if (!ipi_data) {
		kfree(nano_dev);
		return -ENOMEM;
	}
	ipi_data->nanohub_dev = nano_dev;
	nano_dev->drv_data = &ipi_data->data;
	nanohub_probe(&pdev->dev, nano_dev); /* g_nanohub_data_p is assigned inside this call */
	platform_set_drvdata(pdev, &ipi_data->data.free_pool);
	g_nanohub_ipi_data = ipi_data;

	nanohub_ipi_comms_init(ipi_data);
	init_completion(&nanohub_ipi_rx.isr_comp);
	status = scp_ipi_registration(IPI_CHRE,
		scp_to_ap_ipi_handler, "chre_ap_rx");
	/*init nano scp ipi status*/
	WRITE_ONCE(scp_nano_ipi_status, 1);
	scp_A_register_notify(&nano_ipi_notifier);

	return 0;
}

Next, look at nanohub_ipi_comms_init(), which initializes the comms member of the structure g_nanohub_data_p points to. Several of these functions are used in the analysis below.

static void nanohub_ipi_comms_init(struct nanohub_ipi_data *ipi_data)
{
	struct nanohub_comms *comms = &ipi_data->data.comms;

	comms->seq = 1;
	comms->timeout_write = msecs_to_jiffies(512);
	comms->timeout_ack = msecs_to_jiffies(3);
	comms->timeout_reply = msecs_to_jiffies(3);
	comms->open = nanohub_ipi_open;
	comms->close = nanohub_ipi_close;
	comms->write = nanohub_ipi_write;
	comms->read = nanohub_ipi_read;
/*	comms->tx_buffer = kmalloc(4096, GFP_KERNEL | GFP_DMA); */
	comms->rx_buffer = kmalloc(4096, GFP_KERNEL | GFP_DMA);
	WARN_ON(comms->rx_buffer == NULL);
	nanohub_ipi_rx.buff = comms->rx_buffer;
	sema_init(&scp_nano_ipi_sem, 1);
}

SCP-Side Driver Analysis

Receive-path variables
//vendor\mediatek\proprietary\hardware\contexthub\firmware\links\plat\src\hostIntfIPI.c
static void *gRxBuf;
HostIntfCommCallbackF IPI_Rx_callback;

//vendor\mediatek\proprietary\hardware\contexthub\firmware\src\hostIntf.c
__attribute__((aligned(4))) static uint8_t mRxBuf[NANOHUB_PACKET_SIZE_MAX];

On the SCP side, start with the three variables above: the mRxBuf array holds the data sent over from the AP side, and the gRxBuf pointer ends up pointing at mRxBuf. The following explains how they are wired together.

The hostIntf app starts from hostIntfRequest(), which calls platHostIntfInit() to assign the pre-initialized struct HostIntfComm gIPIComm to mComm, and then calls hostIntfIPIRxPacket() via mComm->rxPacket.

static bool hostIntfRequest(uint32_t tid)
{
    mHostIntfTid = tid;
    atomicBitsetInit(mInterrupt, HOSTINTF_MAX_INTERRUPTS);
    atomicBitsetInit(mInterruptMask, HOSTINTF_MAX_INTERRUPTS);
#ifdef AP_INT_NONWAKEUP
    hostIntfSetInterruptMask(NANOHUB_INT_NONWAKEUP);
#endif
    mTxBuf.prePreamble = NANOHUB_PREAMBLE_BYTE;
    mTxBuf.postPreamble = NANOHUB_PREAMBLE_BYTE;

    mComm = platHostIntfInit();
    if (mComm) {
        int err = mComm->request();
        if (!err) {
            nanohubInitCommand();
            mComm->rxPacket(mRxBuf, sizeof(mRxBuf), hostIntfRxDone);
            osEventSubscribe(mHostIntfTid, EVT_APP_START);
            return true;
        }
    }

    return false;
}

//vendor\mediatek\proprietary\hardware\contexthub\firmware\links\plat\src\hostIntfIPI.c
static const struct HostIntfComm gIPIComm = {
    .request = hostIntfIPIRequest,
    .rxPacket = hostIntfIPIRxPacket,
    .txPacket = hostIntfIPITxPacket,
    .release = hostIntfIPIRelease,
};

As hostIntfIPIRxPacket() shows, gRxBuf is set to the rxBuf argument, which is the mRxBuf array, and the IPI_Rx_callback function pointer is set to hostIntfRxDone.

static int hostIntfIPIRxPacket(void *rxBuf, size_t rxSize,
                               HostIntfCommCallbackF callback)
{
    gRxBuf = rxBuf;
    gRxSize = rxSize;
    IPI_Rx_callback = callback;
...
    return 0;   //todo: error handle
}

AP-to-SCP Data Path

The following uses enabling the ambient light sensor as an example.

Flow on the AP side

Start directly from hf_manager_write(), the write operation of the hf_manager character device. The client structure was created when the hf_manager node was opened; the data passed down from the upper layer is copied into a struct hf_manager_cmd cmd.

static ssize_t hf_manager_write(struct file *filp,
		const char __user *buf, size_t count, loff_t *f_pos)
{
	struct hf_manager_cmd cmd;
	struct hf_client *client = filp->private_data;
	memset(&cmd, 0, sizeof(cmd));
...
	if (copy_from_user(&cmd, buf, count))
		return -EFAULT;
	return hf_manager_drive_device(client, &cmd);
}

Next comes hf_manager_drive_device(), which calls hf_manager_find_manager() to walk the hfcore->manager_list list and find the manager that was set up earlier for mtk_nanohub_dev->hf_dev; struct hf_device *device then points at mtk_nanohub_dev->hf_dev.

static int hf_manager_drive_device(struct hf_client *client,
		struct hf_manager_cmd *cmd)
{
	int err = 0;
	struct sensor_state old;
	struct hf_manager *manager = NULL;
	struct hf_device *device = NULL;
	struct hf_core *core = client->core;
	uint8_t sensor_type = cmd->sensor_type;

	if (unlikely(sensor_type >= SENSOR_TYPE_SENSOR_MAX))
		return -EINVAL;

	mutex_lock(&core->manager_lock);
	manager = hf_manager_find_manager(core, sensor_type);
...
	device = manager->hf_dev;
...
	switch (cmd->action) {
	case HF_MANAGER_SENSOR_ENABLE:
	case HF_MANAGER_SENSOR_DISABLE:
		hf_manager_update_client_param(client, cmd, &old);
		err = hf_manager_device_enable(device, sensor_type);
		if (err < 0)
			hf_manager_clear_client_param(client, cmd, &old);
		break;
	...
	}
	mutex_unlock(&core->manager_lock);
	return err;
}

Based on the command from the upper layer, hf_manager_update_client_param() first records the new parameters in the client's request member, and then hf_manager_device_enable() is called.

Inside hf_manager_device_enable(), hf_manager_find_best_param() aggregates the updated parameters into local variables, and depending on how they changed, the corresponding device operations are invoked; here we follow the device->enable() call. The device passed in is the mtk_nanohub_dev->hf_dev structure, which was already initialized (see the AP-Side Driver Analysis section).

static int hf_manager_device_enable(struct hf_device *device,
		uint8_t sensor_type)
{
	int err = 0;
	struct sensor_state old;
	struct hf_manager *manager = device->manager;
	struct hf_core *core = device->manager->core;
	bool best_enable = false;
	int64_t best_delay = S64_MAX;
	int64_t best_latency = S64_MAX;
...
	hf_manager_find_best_param(core, sensor_type, &best_enable,
		&best_delay, &best_latency);

	if (best_enable) {
		device_request_update(core, sensor_type, &old);
		if (device_rebatch(core, sensor_type,
				best_delay, best_latency)) {
			err = device->batch(device, sensor_type,
				best_delay, best_latency);
...
		}
		if (device_reenable(core, sensor_type, best_enable)) {
			/* must update io_enabled before enable */
			atomic_inc(&manager->io_enabled);
			err = device->enable(device, sensor_type, best_enable);
...
		}
...
	} else {
...
	}
...
}

device->enable() actually resolves to mtk_nanohub_enable(), and following it further leads to:

//mtk_nanohub_enable() -> mtk_nanohub_enable_to_hub()
int mtk_nanohub_enable_to_hub(uint8_t sensor_id, int enabledisable)
{
	uint8_t sensor_type = id_to_type(sensor_id);
	struct ConfigCmd cmd;
	int ret = 0;
...
	sensor_state[sensor_type].enable = enabledisable;
	init_sensor_config_cmd(&cmd, sensor_type);
	if (atomic_read(&power_status) == SENSOR_POWER_UP) {
		ret = nanohub_external_write((const uint8_t *)&cmd,
			sizeof(struct ConfigCmd));
		if (ret < 0)
			pr_err("fail enable: [%d,%d]\n", sensor_id, cmd.cmd);
	}
...
}

init_sensor_config_cmd() fills in the cmd structure; the three members of interest are:

cmd->evtType = EVT_NO_SENSOR_CONFIG_EVENT; /* event type, used on the SCP side */
cmd->sensorType = SENSOR_TYPE_LIGHT;       /* using the light sensor as the example */
cmd->cmd = CONFIG_CMD_ENABLE;              /* enable command */

Then nanohub_external_write() is called. It uses the g_nanohub_data_p pointer discussed earlier, plus an important macro, CMD_COMMS_WRITE, whose value the SCP side will match against.

#define CMD_COMMS_WRITE			0x00001091
ssize_t nanohub_external_write(const char *buffer, size_t length)
{
	struct nanohub_data *data = g_nanohub_data_p;
	int ret;
	u8 ret_data;
...
	if (nanohub_comms_tx_rx_retrans
		(data, CMD_COMMS_WRITE, buffer, length, &ret_data,
		sizeof(ret_data), false,
		10, 10) == sizeof(ret_data)) {
		if (ret_data)
			ret = length;
		else
			ret = 0;
	} else {
		ret = ERROR_NACK;
	}
...
}

Following the code further, the data ultimately reaches nanohub_ipi_write(), one of the comms member functions of the structure g_nanohub_data_p points to.

int nanohub_ipi_write(void *data, u8 *tx, int length, int timeout)
{
	int ret;
	int retry = NANOHUB_IPI_SEND_RETRY;
...
	ret = SCP_IPI_ERROR;
	while (retry-- && (READ_ONCE(scp_nano_ipi_status) == 1)) {
		ret = scp_ipi_send(IPI_CHRE, tx, length, 0, SCP_A_ID);
		if (ret != SCP_IPI_BUSY)
			break;
		usleep_range(100, 200);
	}
...
}

So the data is finally handed to the SCP side by scp_ipi_send().

Flow on the SCP side

Based on the IPI the AP side sent, find the handler registered on the SCP side under ID IPI_CHRE:

static void chre_ipi_rxhandler(int id, void * data, unsigned int len)
{
...
    if (len <= NANOHUB_PACKET_SIZE_MAX && gRxBuf) {
        gRxSize = len;
        memcpy(gRxBuf, data, gRxSize);
        IPI_Rx_callback(gRxSize, 0);    //todo: return error code
    } else
        osLog(LOG_ERROR, "len %u > %u, gRxBuf %p\n", len, NANOHUB_PACKET_SIZE_MAX, gRxBuf);
}

The data from the AP side is copied into gRxBuf, which points at the mRxBuf array, so the data ends up in mRxBuf. Then IPI_Rx_callback() is invoked; that function pointer was set to hostIntfRxDone(). Following the call chain from hostIntfRxDone() eventually reaches hostIntfGenerateAck():

static void hostIntfGenerateAck(void *cookie)
{
    uint32_t seq = 0;
    void *txPayload = hostIntfGetPayload(mTxBuf.buf);
    void *rxPayload = hostIntfGetPayload(mRxBuf);
    uint8_t rx_len = hostIntfGetPayloadLen(mRxBuf);
    uint32_t resp = NANOHUB_FAST_UNHANDLED_ACK;

    atomicWrite32bits(&mActiveWrite, true);
    hostIntfSetInterrupt(NANOHUB_INT_WAKE_COMPLETE);
    mRxCmd = hostIntfFindHandler(mRxBuf, mRxSize, &seq);

    if (mRxCmd) {
        if (mTxRetrans.seqMatch) {
            hostIntfTxBuf(mTxSize, &mTxBuf.prePreamble, hostIntfTxPayloadDone);
        } else {
            mTxRetrans.seq = seq;
            mTxRetrans.cmd = mRxCmd;
            if (mRxCmd->fastHandler)
                resp = mRxCmd->fastHandler(rxPayload, rx_len, txPayload, mRxTimestamp);

            hostIntfTxSendAck(resp);
        }
    } else {
        if (mBusy)
            hostIntfTxPacket(NANOHUB_REASON_NAK_BUSY, 0, seq, hostIntfTxAckDone);
        else
            hostIntfTxPacket(NANOHUB_REASON_NAK, 0, seq, hostIntfTxAckDone);
    }
}

//firmware\src\nanohubCommand.c
const static struct NanohubCommand mBuiltinCommands[] = {
    NANOHUB_COMMAND(NANOHUB_REASON_GET_OS_HW_VERSIONS,
                    getOsHwVersion,
                    getOsHwVersion,
                    struct NanohubOsHwVersionsRequest,
                    struct NanohubOsHwVersionsRequest),
...
    NANOHUB_COMMAND(NANOHUB_REASON_WRITE_EVENT,
                    writeEvent,
                    writeEvent,
                    __le32,
                    struct NanohubWriteEventRequest),
};

hostIntfFindHandler() looks up the matching entry in the struct NanohubCommand mBuiltinCommands[] array; the key used for the lookup is the CMD_COMMS_WRITE macro mentioned earlier, whose value equals NANOHUB_REASON_WRITE_EVENT. Once the entry is found, its member function writeEvent is called.

static uint32_t writeEvent(void *rx, uint8_t rx_len, void *tx, uint64_t timestamp)
{
    struct NanohubWriteEventRequest *req = rx;
    struct NanohubWriteEventResponse *resp = tx;
    uint8_t *packet;
    struct HostHubRawPacket *rawPacket;
    uint32_t tid;
    EventFreeF free = slabFree;
    if (le32toh(req->evtType) == EVT_APP_FROM_HOST) {
...
    } else {
        packet = slabAllocatorAlloc(mEventSlab);
        if (!packet) {
            packet = heapAlloc(rx_len - sizeof(req->evtType));
            free = heapFree;
        }
        if (!packet) {
            resp->accepted = false;
        } else {
            memcpy(packet, req->evtData, rx_len - sizeof(req->evtType));
            resp->accepted = osEnqueueEvtOrFree(le32toh(req->evtType), packet, free); /* enqueue the event onto the app event queue */
        }
    }

    return sizeof(*resp);
}

writeEvent() enqueues the event onto the app event queue according to the event type carried in the AP-side data. The event passed down from the AP is EVT_NO_SENSOR_CONFIG_EVENT; it is eventually received by the hostIntf app, processed in hostIntfHandleEvent() according to the event information, and finally the sensor driver is told to perform the corresponding operation (for details of how the hostIntf app handles events, see "[sensor2.0][SCP] A Brief Analysis of Data and Control Information Transfer").

SCP-to-AP Data Path

The following uses light-sensor data reporting as an example.

It is recommended to read "[sensor2.0][SCP] A Brief Analysis of Data and Control Information Transfer" first.

After the light sensor finishes sampling, it broadcasts an EVENT_DATA event to the alsps app. On receiving it, alsps enqueues a light-data event onto the app event queue via osEnqueueEvt(). The hostIntf app receives that event and handles it in hostIntfHandleEvent():

static void hostIntfHandleEvent(uint32_t evtType, const void* evtData)
{
    ...
	if (evtType == EVT_APP_START) {...}
    ...
    else if (evtType > EVT_NO_FIRST_SENSOR_EVENT && evtType < EVT_NO_SENSOR_CONFIG_EVENT && mSensorList[(evtType & 0xFF)-1] < MAX_REGISTERED_SENSORS) { .../* sensor data is handled in this branch */}
    ...
}

hostIntfHandleEvent() eventually hands the event to simpleQueueEnqueue(). Rather than examining every function, the call chain is listed below; the analysis focuses on the last few functions.

/* call chain: simpleQueueEnqueue() -> SensorQueueEnqueue() -> hostIntfTransferData() -> contextHubFormateData() -> contextHubTransferOnChangeSensor() */
static void contextHubTransferOnChangeSensor(uint8_t mtk_type, const struct mtkActiveSensor *sensor)
{
    int ret = 0;
    uint32_t numSamples = 0, numFlushes = 0;
    struct data_unit_t dummy;
    uint64_t lastTimeStamp = 0;
    bool doSend = true;

    memset(&dummy, 0, sizeof(struct data_unit_t));

    /* report data to ap firstly, numSamples represent data */
    for (numSamples = 0; numSamples < sensor->buffer.firstSample.numSamples; ++numSamples) {
        dummy.sensor_type = mtkTypeToApId(mtk_type);
        dummy.flush_action = DATA_ACTION;
        if (numSamples == 0) {
            lastTimeStamp = dummy.time_stamp = sensor->buffer.referenceTime;
        } else {
            if (sensor->numAxis == NUM_AXIS_THREE)
                dummy.time_stamp = lastTimeStamp + sensor->buffer.triple[numSamples].deltaTime;
            else
                dummy.time_stamp = lastTimeStamp + sensor->buffer.single[numSamples].deltaTime;
            lastTimeStamp = dummy.time_stamp;
        }
        switch (mtk_type) {
            case SENSOR_TYPE_PROXIMITY:
                /* we don't support this sensType access dram to give data to ap by notify ap */
                dummy.proximity_t.oneshot = (int)sensor->buffer.single[numSamples].fdata;
                break;
			...
            case SENSOR_TYPE_LIGHT:
                dummy.light = (uint32_t)sensor->buffer.single[numSamples].fdata;
                break;
            ...
        }
        if (doSend) {
            ret = contextHubSramFifoWrite(&dummy);
            if (ret == SRAM_FIFO_FULL)
                contextHubSramFifoRead();
        }
    }
    /* report flush to ap secondly, numFlushes represent flush */
    for (numFlushes = 0; numFlushes < sensor->buffer.firstSample.numFlushes; ++numFlushes) {
        dummy.sensor_type = mtkTypeToApId(mtk_type);
        dummy.flush_action = FLUSH_ACTION;
        osLog(LOG_INFO, "mtk_type: %d send flush action\n", mtk_type);
        ret = contextHubSramFifoWrite(&dummy);
        if (ret == SRAM_FIFO_FULL)
            contextHubSramFifoRead();
    }
}

Here, contextHubSramFifoWrite() stages the data in an SRAM FIFO; when the SRAM FIFO fills up, contextHubSramFifoRead() is called. Inside contextHubSramFifoRead(), two functions matter: contextHubIpiNotifyAp() and contextHubDramFifoWrite(). contextHubDramFifoWrite() writes the data the SCP needs to hand to the AP into the shared DRAM, for the AP to read. contextHubIpiNotifyAp() actually just posts an EVT_IPI_TX event to the app event queue; note the two macros it carries, SENSOR_HUB_NOTIFY and SCP_FIFO_FULL, which the AP side will use.

static int contextHubSramFifoRead(void)
{
    int ret = 0;
    uint32_t realSizeLeft = 0, realSizeTx = 0;
    uint32_t realSizeLeftNum = 0, realSizeTxNum = 0;
    uint32_t currWp = 0;
    struct data_unit_t dummy;
    struct sensorFIFO *dram_fifo = NULL;

	...
	/* firstly, we should get how much space remains in dram; if dram is full, we first notify ap to copy data
	 */
    ret = contextHubDramFifoSizeLeft(&realSizeLeft);
    if (ret == DRAM_FIFO_FULL) {
	...
        if (mContextHubDramFifo.lastWpPointer != currWp) {
            mContextHubDramFifo.lastWpPointer = currWp;
            contextHubIpiNotifyAp(0, SENSOR_HUB_NOTIFY, SCP_FIFO_FULL, &dummy);
        }
    }
    realSizeLeftNum = realSizeLeft / SENSOR_DATA_SIZE;
    /* secondly, we should copy data to dram */
    if (realSizeLeftNum < realSizeTxNum) {
        osLog(LOG_INFO, "realSizeLeftNum(%lu) < realSizeTxNum(%lu)\n", realSizeLeftNum, realSizeTxNum);
        ret = contextHubDramFifoWrite((const uint8_t *)mContextHubSramFifo.ringBuf, realSizeLeft);
		...
    } else {
        /* update head pointer again; this case will copy the full sram fifo data to dram, then we should
         * let head and tail point to the buffer head
         */
        ret = contextHubDramFifoWrite((const uint8_t *)mContextHubSramFifo.ringBuf, realSizeTx);
		...
    }
    return ret;
}

The EVT_IPI_TX event is received by the contextHubFw app and handled in contextHubFwHandleEvent(), which then calls contextHubHandleIpiTxEvent(). That function sends an IPI to the AP via scp_ipi_send().

static int contextHubHandleIpiTxEvent(enum ipiTxEvent event)
{
	...
    ipi_ret = scp_ipi_send(IPI_SENSOR,
        (void *)&mContextHubIpi.ringBuf[mContextHubIpi.tail], SENSOR_IPI_SIZE, 0, IPI_SCP2AP);
	...
    return 0;
}

From the IPI ID, the AP-side receive handler can be located: mtk_nanohub_ipi_handler():

static void mtk_nanohub_ipi_handler(int id,
		void *data, unsigned int len)
{
	union SCP_SENSOR_HUB_DATA *rsp = (union SCP_SENSOR_HUB_DATA *)data;
	const struct mtk_nanohub_cmd *cmd;
	...
	cmd = mtk_nanohub_find_cmd(rsp->rsp.action);
	if (cmd != NULL)
		cmd->handler(rsp, len);
	else
		pr_err("cannot find cmd!\n");
}

static const struct mtk_nanohub_cmd mtk_nanohub_cmds[] = {
	MTK_NANOHUB_CMD(SENSOR_HUB_NOTIFY,
		mtk_nanohub_notify_cmd),
	MTK_NANOHUB_CMD(SENSOR_HUB_GET_DATA,
		mtk_nanohub_common_cmd),
	MTK_NANOHUB_CMD(SENSOR_HUB_SET_CONFIG,
		mtk_nanohub_common_cmd),
	MTK_NANOHUB_CMD(SENSOR_HUB_SET_CUST,
		mtk_nanohub_common_cmd),
	MTK_NANOHUB_CMD(SENSOR_HUB_SET_TIMESTAMP,
		mtk_nanohub_common_cmd),
	MTK_NANOHUB_CMD(SENSOR_HUB_RAW_DATA,
		mtk_nanohub_common_cmd),
};

mtk_nanohub_find_cmd() uses the action the SCP reported to find the matching entry in the struct mtk_nanohub_cmd mtk_nanohub_cmds[] array and then calls its handler member; here the match resolves to mtk_nanohub_notify_cmd():

static void mtk_nanohub_notify_cmd(union SCP_SENSOR_HUB_DATA *rsp,
		unsigned int rx_len)
{
	unsigned long flags = 0;

	switch (rsp->notify_rsp.event) {
	case SCP_DIRECT_PUSH:
	case SCP_FIFO_FULL:
		mtk_nanohub_moving_average(rsp);
		mtk_nanohub_write_wp_queue(rsp);
		WRITE_ONCE(chre_kthread_wait_condition, true);
		wake_up(&chre_kthread_wait);
		break;
	case SCP_NOTIFY:
		break;
	case SCP_INIT_DONE:
		spin_lock_irqsave(&scp_state_lock, flags);
		WRITE_ONCE(scp_chre_ready, true);
		if (READ_ONCE(scp_system_ready) && READ_ONCE(scp_chre_ready)) {
			spin_unlock_irqrestore(&scp_state_lock, flags);
			atomic_set(&power_status, SENSOR_POWER_UP);
			wake_up(&power_reset_wait);
		} else
			spin_unlock_irqrestore(&scp_state_lock, flags);
		break;
	default:
		break;
	}
}

Given the information the SCP sent up, execution enters the SCP_FIFO_FULL case: mtk_nanohub_write_wp_queue() adds the data event to the mtk_nanohub_dev->wp_queue structure introduced earlier, and then the chre_kthread_wait wait queue is woken. That wait queue is used in mtk_nanohub_direct_push_work(), the function of a kernel thread created by the mtk_nanohub driver.

static int mtk_nanohub_direct_push_work(void *data)
{
	for (;;) {
		wait_event(chre_kthread_wait,
			READ_ONCE(chre_kthread_wait_condition));
		WRITE_ONCE(chre_kthread_wait_condition, false);
		mtk_nanohub_read_wp_queue();
	}
	return 0;
}

mtk_nanohub_read_wp_queue() reads the data back out of mtk_nanohub_dev->wp_queue; after a fairly long call chain, the data is parsed in mtk_nanohub_report_to_manager():

/*mtk_nanohub_read_wp_queue() -> mtk_nanohub_server_dispatch_data() -> mtk_nanohub_report_data() -> mtk_nanohub_report_to_manager() */
static int mtk_nanohub_report_to_manager(struct data_unit_t *data)
{
	struct mtk_nanohub_device *device = mtk_nanohub_dev;
	struct hf_manager *manager = mtk_nanohub_dev->hf_dev.manager;
	struct hf_manager_event event;

	if (!manager)
		return 0;

	if (data->flush_action == DATA_ACTION) {
		switch (data->sensor_type) {
		...
		case ID_LIGHT:
			event.timestamp = data->time_stamp;
			event.sensor_type = id_to_type(data->sensor_type);
			event.action = data->flush_action;
			event.word[0] = data->light;
			break;
		case ID_PROXIMITY:
			event.timestamp = data->time_stamp;
			event.sensor_type = id_to_type(data->sensor_type);
			event.action = data->flush_action;
			event.word[0] = data->proximity_t.oneshot;
			event.word[1] = data->proximity_t.steps;
			break;
		...
	} else if (data->flush_action == FLUSH_ACTION) {
		...
	} else {
		...
	}
	...
	return manager->report(manager, &event);
}

Finally, the report member of mtk_nanohub_dev->hf_dev.manager is called, i.e. hf_manager_io_report(). That function passes the manager's core member (the global hfcore structure introduced earlier) and the data to hf_manager_find_client():

static int hf_manager_find_client(struct hf_core *core,
		struct hf_manager_event *event)
{
	int err = 0;
	unsigned long flags;
	struct hf_client *client = NULL;

	spin_lock_irqsave(&core->client_lock, flags);
	list_for_each_entry(client, &core->client_list, list) {
		/* must (err |=), collect all err to decide retry */
		err |= hf_manager_distinguish_event(client, event);
	}
	spin_unlock_irqrestore(&core->client_lock, flags);

	return err;
}

As the function shows, the hfcore structure is used to find the struct hf_client instances created when the HAL opened the hf_manager node, i.e. the clients that requested this data. Finally, hf_manager_report_event() places the data into client->hf_fifo.

/*hf_manager_distinguish_event() -> hf_manager_report_event() */
static int hf_manager_report_event(struct hf_client *client,
		struct hf_manager_event *event)
{
	unsigned long flags;
	unsigned int next = 0;
	int64_t hang_time = 0;
	const int64_t max_hang_time = 1000000000LL;
	struct hf_client_fifo *hf_fifo = &client->hf_fifo;
	spin_lock_irqsave(&hf_fifo->buffer_lock, flags);
	...
	hf_fifo->buffer[hf_fifo->head++] = *event; /* store the event in hf_fifo for the HAL to read */
	hf_fifo->head &= hf_fifo->bufsize - 1;
	/* remain 1 count */
	next = hf_fifo->head + 1;
	next &= hf_fifo->bufsize - 1;
	...
	spin_unlock_irqrestore(&hf_fifo->buffer_lock, flags);
	wake_up_interruptible(&hf_fifo->wait);
	return 0;
}

Now look at hf_manager_read(), the read operation of the hf_manager node:

static ssize_t hf_manager_read(struct file *filp,
		char __user *buf, size_t count, loff_t *f_pos)
{
	struct hf_client *client = filp->private_data;
	struct hf_client_fifo *hf_fifo = &client->hf_fifo;
	struct hf_manager_event event;
	size_t read = 0;
	if (count != 0 && count < sizeof(struct hf_manager_event))
		return -EINVAL;

	for (;;) {
		if (hf_fifo->head == hf_fifo->tail)
			return 0;
		if (count == 0)
			break;
		while (read + sizeof(event) <= count &&
				fetch_next(hf_fifo, &event)) {

			if (copy_to_user(buf + read, &event, sizeof(event)))
				return -EFAULT;
			read += sizeof(event);
		}
		if (read)
			break;
	}
	return read;
}

The upper layer reads the hf_manager node to fetch sensor data from the client->hf_fifo structure where it was stored. That completes the analysis of the whole data path.
