Starting a Service from the Binder Perspective
I. Overview
The earlier article on the startService flow analyzed the Service startup process in detail from the system framework layer, as shown in the diagram below:
During Service startup, the initiating process first calls startService; the request passes through the binder driver and finally reaches a binder thread in the system_server process, which executes the ActivityManagerService code. This article takes the Binder perspective to walk through one step of that journey in depth: how AMP.startService ends up calling AMS.startService.
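For orientation, the app-side call that kicks off this entire path is the ordinary Context.startService(); a minimal, hedged sketch (the component name is hypothetical, not from the source):

```java
import android.content.Context;
import android.content.Intent;

// Minimal sketch of the app-side trigger: Context.startService() is what
// eventually funnels into ActivityManagerProxy.startService() analyzed below.
public class ServiceStarter {
    public static void start(Context context) {
        Intent intent = new Intent();
        // Hypothetical service component, just for illustration
        intent.setClassName("com.example.app", "com.example.app.MyService");
        context.startService(intent); // -> ContextImpl -> AMP.startService -> binder -> AMS
    }
}
```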
Inheritance relationships
Two classes are involved here: AMP (ActivityManagerProxy) and AMS (ActivityManagerService). Let's first look at how they relate.
From the diagram above:
- AMS extends AMN (an abstract class);
- AMN implements the IActivityManager interface and extends Binder (the Binder server side);
- AMP also implements the IActivityManager interface;
- Binder implements the IBinder interface, and IActivityManager extends IInterface (a simplified class skeleton follows below).
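A simplified skeleton of this hierarchy (bodies omitted; only a sketch of the relationships, not the real framework source):

```java
// Sketch only: the inheritance/interface relationships, with members omitted.
interface IInterface { /* android.os.IInterface */ }
interface IBinder { /* android.os.IBinder */ }
interface IActivityManager extends IInterface {
    // startService(...) and the other AMS entry points are declared here
}

class Binder implements IBinder { /* Binder server side */ }

abstract class ActivityManagerNative extends Binder implements IActivityManager {
    // AMN: turns incoming transactions into IActivityManager calls (onTransact)
}

class ActivityManagerService extends ActivityManagerNative {
    // AMS: the real implementation, running in system_server
}

class ActivityManagerProxy implements IActivityManager {
    // AMP: client-side proxy holding a BinderProxy (mRemote) that points at AMS
}
```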
II. Analysis
2.1 AMP.startService
```java
public ComponentName startService(IApplicationThread caller, Intent service,
        String resolvedType, String callingPackage, int userId) throws RemoteException {
    // [See Section 2.1.1]
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IActivityManager.descriptor);
    data.writeStrongBinder(caller != null ? caller.asBinder() : null);
    service.writeToParcel(data, 0);
    data.writeString(resolvedType);
    data.writeString(callingPackage);
    data.writeInt(userId);
    // Send the data through Binder [See Section 2.2]
    mRemote.transact(START_SERVICE_TRANSACTION, data, reply, 0);
    // Read any exception carried in the reply
    reply.readException();
    // Build a ComponentName from the reply data
    ComponentName res = ComponentName.readFromParcel(reply);
    // [See Section 2.1.2]
    data.recycle();
    reply.recycle();
    return res;
}
```
Two Parcel objects are created: data carries the outgoing data and reply receives the response. Here descriptor = "android.app.IActivityManager".
- All startService-related data is packed into the Parcel object data and sent to the Binder driver via mRemote;
- the Binder reply message is packed into the reply object, from which the ComponentName is parsed (a generic sketch of this proxy pattern follows the list).
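The marshalling pattern above is exactly what AIDL-generated proxies do by hand. Below is a hedged, self-contained sketch for a made-up one-method interface (IFooService, FOO_TRANSACTION and the descriptor string are illustrative, not framework APIs):

```java
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

// Illustrative proxy for a hypothetical interface, mirroring AMP.startService:
// obtain data/reply, write the interface token and arguments, transact, check
// the reply for an exception, read the result, recycle both Parcels.
class FooServiceProxy {
    static final String DESCRIPTOR = "com.example.IFooService";        // made-up descriptor
    static final int FOO_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION; // made-up code
    private final IBinder mRemote;

    FooServiceProxy(IBinder remote) { mRemote = remote; }

    int foo(String arg) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            data.writeString(arg);
            mRemote.transact(FOO_TRANSACTION, data, reply, 0);
            reply.readException();
            return reply.readInt();
        } finally {
            data.recycle();
            reply.recycle();
        }
    }
}
```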
2.1.1 Parcel.obtain
[-> Parcel.java]
```java
public static Parcel obtain() {
    final Parcel[] pool = sOwnedPool;
    synchronized (pool) {
        Parcel p;
        // POOL_SIZE = 6
        for (int i = 0; i < POOL_SIZE; i++) {
            p = pool[i];
            if (p != null) {
                pool[i] = null;
                return p;
            }
        }
    }
    // If the cache pool has no ready-made Parcel, create one directly
    return new Parcel(0);
}
```
sOwnedPool is a cache pool of size 6 that holds Parcel objects. obtain() first tries to take a Parcel out of this pool and only creates a new one when the pool is empty.
The goal of this design is to avoid the overhead of creating a new Parcel object on every call.
2.1.2 Parcel.recycle
```java
public final void recycle() {
    // Free the native parcel object
    freeBuffer();
    final Parcel[] pool;
    // Choose which pool the object goes back into
    if (mOwnsNativeParcelObject) {
        pool = sOwnedPool;
    } else {
        mNativePtr = 0;
        pool = sHolderPool;
    }
    synchronized (pool) {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (pool[i] == null) {
                pool[i] = this;
                return;
            }
        }
    }
}
```
A Parcel object that is no longer needed is put back into the cache pool so it can be reused; when the pool is already full, the object simply is not added.
The mOwnsNativeParcelObject field decides whether the Parcel object is returned to sOwnedPool or to sHolderPool. Its value depends on whether a native pointer was supplied during Parcel initialization in init().
```java
private void init(long nativePtr) {
    if (nativePtr != 0) {
        // A native pointer was supplied from outside: this Parcel does not own it,
        // so recycle() will return the object to sHolderPool
        mNativePtr = nativePtr;
        mOwnsNativeParcelObject = false;
    } else {
        // Otherwise the Parcel creates and owns its own native object,
        // so recycle() will return it to sOwnedPool
        mNativePtr = nativeCreate();
        mOwnsNativeParcelObject = true;
    }
}
```
recycle() puts a Parcel object back into the pool, while obtain() takes one out of it.
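To see the pooling in action, here is a small hedged demonstration (assumes it runs on a device, e.g. from a test); it is not framework code, just an illustration that obtain() can hand back the very object recycle() returned to the pool:

```java
import android.os.Parcel;
import android.util.Log;

// Demonstration: with an otherwise idle pool, the next obtain() typically
// returns the same instance that was just recycled into sOwnedPool.
public class ParcelPoolDemo {
    public static void demonstrate() {
        Parcel first = Parcel.obtain();
        first.recycle();                 // goes back into sOwnedPool (POOL_SIZE = 6)
        Parcel second = Parcel.obtain(); // usually the object that was just recycled
        Log.d("ParcelPoolDemo", "reused instance: " + (first == second));
        second.recycle();
    }
}
```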
2.2 mRemote.transact
2.2.1 mRemote
mRemote is assigned in AMP's constructor when the AMP object is created, and the AMP object itself is obtained through ActivityManagerNative.getDefault(). The core implementation is the following code:
```java
static public IActivityManager getDefault() {
    return gDefault.get();
}
```
gDefault is a Singleton object, so the singleton pattern is used here.
```java
public abstract class Singleton<T> {
    public final T get() {
        synchronized (this) {
            if (mInstance == null) {
                // On the first call, create() is invoked to obtain the AMP object
                mInstance = create();
            }
            return mInstance;
        }
    }
}
```
get() simply returns mInstance; next let's look at the create() process:
```java
private static final Singleton<IActivityManager> gDefault = new Singleton<IActivityManager>() {
    protected IActivityManager create() {
        // Get the service named "activity"
        IBinder b = ServiceManager.getService("activity");
        // Create the AMP object
        IActivityManager am = asInterface(b);
        return am;
    }
};
```
As covered in the article Binder series 7 — framework layer analysis, ServiceManager.getService("activity") returns a BinderProxy object pointing at the target service AMS; through that proxy the process hosting AMS can be located, so that part is not repeated here. Next, let's see what asInterface does:
```java
public abstract class ActivityManagerNative extends Binder implements IActivityManager {
    static public IActivityManager asInterface(IBinder obj) {
        if (obj == null) {
            return null;
        }
        IActivityManager in = (IActivityManager) obj.queryLocalInterface(descriptor);
        if (in != null) { // null in this case
            return in;
        }
        // Calls the AMP constructor; obj is a BinderProxy object
        // (which records the handle of the remote AMS)
        return new ActivityManagerProxy(obj);
    }

    public ActivityManagerNative() {
        // Calls the parent Binder method to record this interface
        attachInterface(this, descriptor);
    }
    ...
}
```
Next, the AMP constructor:
```java
class ActivityManagerProxy implements IActivityManager {
    public ActivityManagerProxy(IBinder remote) {
        mRemote = remote;
    }
}
```
At this point it is clear that mRemote is the BinderProxy object pointing at the AMS service.
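The asInterface() branch above is the standard "local or remote?" switch every Binder interface uses. A hedged sketch of the same idiom for a hypothetical IFooService (not framework source); the remote branch is the one taken for AMS, which is why an AMP gets created:

```java
import android.os.IBinder;
import android.os.IInterface;

// Sketch of the asInterface() idiom: a local Binder in the same process is
// returned directly via queryLocalInterface(); a BinderProxy from another
// process gets wrapped in a client-side proxy.
final class FooInterfaces {
    static final String DESCRIPTOR = "com.example.IFooService"; // made-up descriptor

    interface IFooService extends IInterface {
        void foo();
    }

    static IFooService asInterface(IBinder obj) {
        if (obj == null) return null;
        IInterface local = obj.queryLocalInterface(DESCRIPTOR);
        if (local instanceof IFooService) {
            return (IFooService) local;  // same process: no Binder transaction needed
        }
        return new FooProxy(obj);        // cross-process: obj is a BinderProxy
    }

    static final class FooProxy implements IFooService {
        private final IBinder mRemote;
        FooProxy(IBinder remote) { mRemote = remote; }
        @Override public IBinder asBinder() { return mRemote; }
        @Override public void foo() { /* marshal + mRemote.transact(...), as in Section 2.1 */ }
    }
}
```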
2.2.2 mRemote.transact
```java
mRemote.transact(START_SERVICE_TRANSACTION, data, reply, 0);
```
Here data holds six pieces of information: descriptor, caller, intent, resolvedType, callingPackage, and userId.
```java
final class BinderProxy implements IBinder {
    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        // Checks whether the Parcel is larger than 800KB
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
        // [See Section 2.3]
        return transactNative(code, data, reply, flags);
    }
}
```
transactNative is a native method; via JNI it calls android_os_BinderProxy_transact.
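As an aside, the 800KB check exists because the binder transaction buffer shared by a process is only about 1MB; a hedged illustration of how exceeding it typically surfaces (component name hypothetical, and the exact exception wrapping varies across Android versions):

```java
import android.content.Context;
import android.content.Intent;

// Illustration only (not framework code): oversized Parcels fail at the driver,
// usually surfacing as a TransactionTooLargeException (a RemoteException subclass).
public class OversizedParcelExample {
    public static void trigger(Context context) {
        Intent intent = new Intent();
        intent.setClassName("com.example.app", "com.example.app.MyService"); // hypothetical
        intent.putExtra("blob", new byte[2 * 1024 * 1024]); // ~2MB, far over the limit
        context.startService(intent); // expect the transaction to fail
    }
}
```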
2.3 android_os_BinderProxy_transact
[-> android_util_Binder.cpp]
```cpp
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) {
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }
    ...
    // Convert the Java Parcels into native Parcels
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    // gBinderProxyOffsets.mObject holds the new BpBinder(handle) object
    IBinder* target = (IBinder*) env->GetLongField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }
    ...
    if (kEnableBinderSample) {
        time_binder_calls = should_time_binder_calls();
        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }
    // This is BpBinder::transact() [See Section 2.4]
    status_t err = target->transact(code, *data, reply, flags);
    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }
    // Otherwise throw the exception matching the outcome of transact
    signalExceptionForError(env, obj, err, true, data->dataSize());
    return JNI_FALSE;
}
```
kEnableBinderSample is a debug switch that enables statistics on how long a transact on the main thread takes. Next we move into the native-layer BpBinder.
Exceptions can be thrown here:
- NullPointerException: thrown when dataObj is null;
- IllegalStateException: thrown when the BpBinder object is null;
- signalExceptionForError(): throws the exception corresponding to the specific result of transact.
2.4 BpBinder.transact
[-> BpBinder.cpp]
```cpp
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) {
    if (mAlive) {
        // [See Section 2.5]
        status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
```
IPCThreadState::self() uses the singleton pattern per thread, guaranteeing that each thread has exactly one instance.
2.5 IPC.transact
[-> IPCThreadState.cpp]
```cpp
status_t IPCThreadState::transact(int32_t handle, uint32_t code, const Parcel& data,
        Parcel* reply, uint32_t flags) {
    status_t err = data.errorCheck(); // check the data for errors
    flags |= TF_ACCEPT_FDS;
    ...
    if (err == NO_ERROR) {
        // Write the transaction data [See Section 2.6]
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            // Wait for the response [See Section 2.7]
            err = waitForResponse(reply);
        }
        ...
    }
    ...
    return err;
}
```
The main steps of transact:
- First writeTransactionData() writes the data into mOut; at this point mIn has no data yet;
- then waitForResponse() runs in a loop until a reply message is received:
  - talkWithDriver() exchanges data with the driver; once a reply message arrives it is written into mIn;
  - when mIn contains data, the appropriate action is taken according to the response code.
Both mOut and mIn are Parcel objects.
2.6 IPC.writeTransactionData
[-> IPCThreadState.cpp]
```cpp
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
        int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer) {
    binder_transaction_data tr;
    tr.target.ptr = 0;
    tr.target.handle = handle; // handle points at AMS
    tr.code = code;            // START_SERVICE_TRANSACTION
    tr.flags = binderFlags;    // 0
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        // data holds the startService-related information
        tr.data_size = data.ipcDataSize();                               // mDataSize
        tr.data.ptr.buffer = data.ipcData();                             // mData pointer
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);  // mObjectsSize
        tr.data.ptr.offsets = data.ipcObjects();                         // mObjects pointer
    }
    ...
    mOut.writeInt32(cmd);        // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // write the binder_transaction_data
    return NO_ERROR;
}
```
2.7 IPC.waitForResponse
```cpp
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) {
    int32_t cmd;
    int32_t err;
    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break; // [See Section 2.8]
        err = mIn.errorCheck();
        // On error, leave the loop; the error is ultimately returned to transact
        if (err < NO_ERROR) break;
        // When mDataSize > mDataPos there is data available, so continue below
        if (mIn.dataAvail() == 0) continue;
        cmd = mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        ...
        default:
            err = executeCommand(cmd); // [See Section 2.9]
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
    }
    return err;
}
```
This is where the code really starts talking to the binder driver: talkWithDriver.
2.8 IPC.talkWithDriver
At this point mOut has data while mIn does not yet. doReceive defaults to true.
```cpp
status_t IPCThreadState::talkWithDriver(bool doReceive) {
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    if (doReceive && needRead) {
        // Fill in the receive buffer; data returned by the driver is written into mIn
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // If there is neither input nor output data, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // ioctl keeps reading and writing; via a syscall it enters the Binder driver
        // and calls binder_ioctl [Section 3.1]
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        ...
    } while (err == -EINTR);
    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}
```
The binder_write_read struct is the structure used to exchange data with the Binder device; communicating with mDriverFD through ioctl is the actual data read/write interaction with the Binder driver.
2.9 IPC.executeCommand
add Service
…
III. Binder Driver
3.1 binder_ioctl
The parameter passed in from [Section 2.8] is cmd = BINDER_WRITE_READ.
```c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    // When binder_stop_on_user_error >= 2, the thread joins the wait queue and sleeps
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    ...
    binder_lock(__func__);
    // Look up the binder_thread in binder_proc: if the current thread is already in the
    // proc's thread list it is returned directly; otherwise a new binder_thread is
    // created and the current thread is added to the proc
    thread = binder_get_thread(proc);
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    switch (cmd) {
    case BINDER_WRITE_READ:
        // [See Section 3.2]
        ret = binder_ioctl_write_read(filp, cmd, arg, thread);
        if (ret)
            goto err;
        break;
    ...
    default:
        ret = -EINVAL;
        goto err;
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    // When binder_stop_on_user_error >= 2, the thread joins the wait queue and sleeps
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    return ret;
}
```
- A return value of -ENOMEM means there was not enough memory to create the binder_thread object.
- A return value of -EINVAL means the cmd parameter is invalid.
3.2 binder_ioctl_write_read
Here arg points to a binder_write_read struct; the mOut data sits in write_buffer, so write_size > 0. mIn has no data yet, and this section follows the write branch (binder_thread_write).
```c
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread) {
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto out;
    }
    // Copy the bwr struct from user space into kernel space
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) {
        // [See Section 3.3]
        ret = binder_thread_write(proc, thread,
                                  bwr.write_buffer,
                                  bwr.write_size,
                                  &bwr.write_consumed);
        // On failure, copy the kernel bwr struct straight back to user space and bail out
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) {
        ...
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
```
3.3 binder_thread_write
```c
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed) {
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        // Copy the command from user space; here it is BC_TRANSACTION
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_TRANSACTION: {
            struct binder_transaction_data tr;
            // Copy the binder_transaction_data from user space
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            // [See Section 3.4]
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ...
        }
        *consumed = ptr - buffer;
    }
    return 0;
}
```
It keeps reading from the address binder_buffer points to, fetching and handling each binder_transaction_data in turn.
3.4 binder_transaction
Since what is being sent is BC_TRANSACTION, reply = 0 here.
```c
static void binder_transaction(struct binder_proc *proc,
            struct binder_thread *thread,
            struct binder_transaction_data *tr, int reply) {
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;

    if (reply) {
        ...
    } else {
        // Locating the target process: handle -> binder_ref -> binder_node -> binder_proc
        if (tr->target.handle) {
            struct binder_ref *ref;
            ref = binder_get_ref(proc, tr->target.handle);
            target_node = ref->node;
        }
        target_proc = target_node->proc;
        ...
    }
    if (target_thread) {
        // e is the binder_transaction_log_entry (its declaration is elided here)
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        // On the first pass target_thread is NULL
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    // For a non-oneway call, record the current thread in the transaction's from field
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;     // the target process is system_server
    t->to_thread = target_thread;
    t->code = tr->code;           // code = START_SERVICE_TRANSACTION
    t->flags = tr->flags;         // flags = 0
    t->priority = task_nice(current);
    // Allocate buffer space from the target process
    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    t->buffer->allow_user_free = 0;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));
    // Copy ptr.buffer and ptr.offsets of the user-space binder_transaction_data into the kernel
    copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)tr->data.ptr.buffer, tr->data_size);
    copy_from_user(offp, (const void __user *)(uintptr_t)tr->data.ptr.offsets, tr->offsets_size);

    off_end = (void *)offp + tr->offsets_size;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
        ...
        case BINDER_TYPE_HANDLE:
        case BINDER_TYPE_WEAK_HANDLE: {
            struct binder_ref *ref = binder_get_ref(proc, fp->handle);
            if (ref->node->proc == target_proc) {
                if (fp->type == BINDER_TYPE_HANDLE)
                    fp->type = BINDER_TYPE_BINDER;
                else
                    fp->type = BINDER_TYPE_WEAK_BINDER;
                fp->binder = ref->node->ptr;
                fp->cookie = ref->node->cookie;
                binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
            } else {
                struct binder_ref *new_ref;
                new_ref = binder_get_ref_for_node(target_proc, ref->node);
                fp->handle = new_ref->desc;
                binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
                trace_binder_transaction_ref_to_ref(t, ref, new_ref);
            }
        } break;
        ...
        default:
            return_error = BR_FAILED_REPLY;
            goto err_bad_object_type;
        }
    }
    if (reply) {
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        thread->transaction_stack = t;
    } else {
        if (target_node->has_async_transaction) {
            target_list = &target_node->async_todo;
            target_wait = NULL;
        } else
            target_node->has_async_transaction = 1;
    }
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
}
```
To be continued...
Original article: http://gityuan.com/2016/09/04/binder-start-service/