
How Choreographer Works in Android

October 21, 2024 · Android

Preface

An Android application's UI is rendered and put on screen through the cooperation of several actors before the user finally sees it: the app process performs measure, layout, and draw, and the SurfaceFlinger process composites the rendered data, among other steps. When the app process starts measuring, laying out, and drawing, and when SurfaceFlinger starts compositing and presenting, determines whether the Android system can refresh the screen in an orderly way and give the user a smooth experience.

To coordinate the app process's production of view data with SurfaceFlinger's consumption of it, Android introduced Choreographer to schedule the app process's layout and draw work together with SurfaceFlinger's composition work, reducing the screen tearing caused by the app's drawing and rendering being out of sync with the display refresh.

This article analyzes how the Choreographer mechanism works from two angles: how the Choreographer instance is created, and how Choreographer requests, dispatches, and processes VSYNC signals.

Note: the source code in this article is from Android 13.

How Choreographer is created

First, when is Choreographer created? When an Activity is started, the system_server process, inside ActivityTaskSupervisor#realStartActivityLocked, makes a Binder call into the app process via IApplicationThread#scheduleTransaction. After Activity#onResume has finished, ViewManager#addView is called, which creates a ViewRootImpl instance; the Choreographer instance is created in ViewRootImpl's constructor and held as a member of ViewRootImpl.

After system_server receives the Binder request to start an Activity, ActivityTaskSupervisor#realStartActivityLocked wraps a LaunchActivityItem and a ResumeActivityItem into a ClientTransaction and schedules it via ClientLifecycleManager#scheduleTransaction, which ultimately calls back into the app process's ApplicationThread#scheduleTransaction through the app's IApplicationThread (a Binder proxy).

// com.android.server.wm.ActivityTaskSupervisor#realStartActivityLocked
	boolean realStartActivityLocked(ActivityRecord r, WindowProcessController proc, boolean andResume, boolean checkConfig) throws RemoteException {
			// ...
			// Create activity launch transaction.
           final ClientTransaction clientTransaction = ClientTransaction.obtain(proc.getThread(), r.token);
           // ...
           clientTransaction.addCallback(LaunchActivityItem.obtain(new Intent(r.intent), System.identityHashCode(r), r.info, mergedConfiguration.getGlobalConfiguration(), mergedConfiguration.getOverrideConfiguration(), r.compat, r.getFilteredReferrer(r.launchedFromPackage), task.voiceInteractor, proc.getReportedProcState(), r.getSavedState(), r.getPersistentSavedState(), results, newIntents, r.takeOptions(), isTransitionForward, proc.createProfilerInfoIfNeeded(), r.assistToken, activityClientController, r.shareableActivityToken, r.getLaunchedFromBubble(), fragmentToken));
           // Set desired final state.
           final ActivityLifecycleItem lifecycleItem;
           if (andResume) {
               lifecycleItem = ResumeActivityItem.obtain(isTransitionForward);
           } else {
               lifecycleItem = PauseActivityItem.obtain();
           }
           clientTransaction.setLifecycleStateRequest(lifecycleItem);
           // Schedule transaction.
           mService.getLifecycleManager().scheduleTransaction(clientTransaction);
           // ...
	}
	// com.android.server.wm.ClientLifecycleManager#scheduleTransaction
    void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
        final IApplicationThread client = transaction.getClient();
        transaction.schedule();
        if (!(client instanceof Binder)) {
            // If client is not an instance of Binder - it's a remote call and at this point it is
            // safe to recycle the object. All objects used for local calls will be recycled after
            // the transaction is executed on client in ActivityThread.
            transaction.recycle();
        }
    }
	// android.app.servertransaction.ClientTransaction#schedule
	/**
     * Schedule the transaction after it has been initialized. It will be sent to the client and
     * handled in the following order:
     * 1. Call preExecute(ClientTransactionHandler)
     * 2. Schedule the message for this transaction
     * 3. Call TransactionExecutor#execute(ClientTransaction)
     */
    public void schedule() throws RemoteException {
        mClient.scheduleTransaction(this); // mClient is the IApplicationThread Binder handle passed over from the app process.
    }

At this point the call chain has reached the app process and is handled by ActivityThread's member of type ApplicationThread. ApplicationThread#scheduleTransaction simply forwards to the outer class, ActivityThread#scheduleTransaction, which posts an EXECUTE_TRANSACTION message to the main thread's MessageQueue through the main-thread Handler mH. The app process then handles EXECUTE_TRANSACTION on the main thread.

/**
 * Manages the main thread of an application process, scheduling and executing
 * tasks on it as requested by the system_server process.
 */
public final class ActivityThread extends ClientTransactionHandler
        implements ActivityThreadInternal {
	@UnsupportedAppUsage
    final ApplicationThread mAppThread = new ApplicationThread();
    @UnsupportedAppUsage
    final Looper mLooper = Looper.myLooper();
    @UnsupportedAppUsage
    final H mH = new H();
    // An executor that performs multi-step transactions.
    private final TransactionExecutor mTransactionExecutor = new TransactionExecutor(this);
    // ...
	private class ApplicationThread extends IApplicationThread.Stub {
		// ...
		@Override
        public void scheduleTransaction(ClientTransaction transaction) throws RemoteException {
            ActivityThread.this.scheduleTransaction(transaction);
        }
        // ...
	}
	// From ClientTransactionHandler, the base class of ActivityThread.
	/** Prepare and schedule transaction for execution. */
    void scheduleTransaction(ClientTransaction transaction) {
        transaction.preExecute(this);
        sendMessage(ActivityThread.H.EXECUTE_TRANSACTION, transaction);
    }
	private void sendMessage(int what, Object obj, int arg1, int arg2, boolean async) {
        if (DEBUG_MESSAGES) {
            Slog.v(TAG,
                    "SCHEDULE " + what + " " + mH.codeToString(what) + ": " + arg1 + " / " + obj);
        }
        Message msg = Message.obtain();
        msg.what = what;
        msg.obj = obj;
        msg.arg1 = arg1;
        msg.arg2 = arg2;
        if (async) {
            msg.setAsynchronous(true);
        }
        // The main thread Handler.
        mH.sendMessage(msg);
    }
	class H extends Handler {
		public void handleMessage(Message msg) {
			// ...
			switch (msg.what) {
				// ...
				case EXECUTE_TRANSACTION:
                    final ClientTransaction transaction = (ClientTransaction) msg.obj;
                    mTransactionExecutor.execute(transaction);
                    if (isSystem()) {
                        // Client transactions inside system process are recycled on the client side
                        // instead of ClientLifecycleManager to avoid being cleared before this
                        // message is handled.
                        transaction.recycle();
                    }
                    // TODO(lifecycler): Recycle locally scheduled transactions.
                    break;
                // ...
            }
			// ...
		}
	}

The main thread handles the message in TransactionExecutor#execute: executeCallbacks runs the LaunchActivityItem, and executeLifecycleState then runs the ResumeActivityItem to drive the Activity to the resumed state.

/**
 * Class that manages transaction execution in the correct order.
 */
public class TransactionExecutor {
	// ...
	/**
     * Resolve transaction.
     * First all callbacks will be executed in the order they appear in the list. If a callback
     * requires a certain pre- or post-execution state, the client will be transitioned accordingly.
     * Then the client will cycle to the final lifecycle state if provided. Otherwise, it will
     * either remain in the initial state, or last state needed by a callback.
     */
    public void execute(ClientTransaction transaction) {
        // ...
		// This is where the lifecycle work is actually executed.
        executeCallbacks(transaction);
        executeLifecycleState(transaction);
        mPendingActions.clear();
        // ...
    }
	/** Cycle through all states requested by callbacks and execute them at proper times. */
    @VisibleForTesting
    public void executeCallbacks(ClientTransaction transaction) {
    	// Retrieve the callbacks (e.g. the LaunchActivityItem) that system_server added earlier.
        final List<ClientTransactionItem> callbacks = transaction.getCallbacks();
        if (callbacks == null || callbacks.isEmpty()) {
            // No callbacks to execute, return early.
            return;
        }
        if (DEBUG_RESOLVER) Slog.d(TAG, tId(transaction) + "Resolving callbacks in transaction");
        final IBinder token = transaction.getActivityToken();
        ActivityClientRecord r = mTransactionHandler.getActivityClient(token);
        // In case when post-execution state of the last callback matches the final state requested
        // for the activity in this transaction, we won't do the last transition here and do it when
        // moving to final state instead (because it may contain additional parameters from server).
        final ActivityLifecycleItem finalStateRequest = transaction.getLifecycleStateRequest();
        final int finalState = finalStateRequest != null ? finalStateRequest.getTargetState()
                : UNDEFINED;
        // Index of the last callback that requests some post-execution state.
        final int lastCallbackRequestingState = lastCallbackRequestingState(transaction);
        final int size = callbacks.size();
        for (int i = 0; i < size; ++i) {
            final ClientTransactionItem item = callbacks.get(i);
            if (DEBUG_RESOLVER) Slog.d(TAG, tId(transaction) + "Resolving callback: " + item);
            final int postExecutionState = item.getPostExecutionState();
            final int closestPreExecutionState = mHelper.getClosestPreExecutionState(r,
                    item.getPostExecutionState());
            if (closestPreExecutionState != UNDEFINED) {
                cycleToPath(r, closestPreExecutionState, transaction);
            }
			// Execute each callback item, e.g. LaunchActivityItem#execute.
            item.execute(mTransactionHandler, token, mPendingActions);
            item.postExecute(mTransactionHandler, token, mPendingActions);
            if (r == null) {
                // Launch activity request will create an activity record.
                r = mTransactionHandler.getActivityClient(token);
            }
            if (postExecutionState != UNDEFINED && r != null) {
                // Skip the very last transition and perform it by explicit state request instead.
                final boolean shouldExcludeLastTransition =
                        i == lastCallbackRequestingState && finalState == postExecutionState;
                cycleToPath(r, postExecutionState, shouldExcludeLastTransition, transaction);
            }
        }
    }
	// ...
}

This eventually reaches ActivityThread#handleResumeActivity, which performs the resume and then calls addView to attach the DecorView to the WindowManagerImpl.

/**
 * Request to move an activity to resumed state.
 * @hide
 */
public class ResumeActivityItem extends ActivityLifecycleItem {
    private static final String TAG = "ResumeActivityItem";
    // ...
    @Override
    public void execute(ClientTransactionHandler client, ActivityClientRecord r,
            PendingTransactionActions pendingActions) {
        Trace.traceBegin(TRACE_TAG_ACTIVITY_MANAGER, "activityResume");
        // client is android.app.ActivityThread.
        client.handleResumeActivity(r, true /* finalStateRequest */, mIsForward,
                "RESUME_ACTIVITY");
        Trace.traceEnd(TRACE_TAG_ACTIVITY_MANAGER);
    }
    // ...
}
// android.app.ActivityThread#handleResumeActivity
    @Override
    public void handleResumeActivity(ActivityClientRecord r, boolean finalStateRequest,
            boolean isForward, String reason) {
        // If we are getting ready to gc after going to the background, well
        // we are back active so skip it.
        unscheduleGcIdler();
        mSomeActivitiesChanged = true;
        // Run the Activity's resume callback.
        if (!performResumeActivity(r, finalStateRequest, reason)) {
            return;
        }
        // ...
        if (r.window == null && !a.mFinished && willBeVisible) {
            r.window = r.activity.getWindow();
            View decor = r.window.getDecorView();
            decor.setVisibility(View.INVISIBLE);
            ViewManager wm = a.getWindowManager();
            WindowManager.LayoutParams l = r.window.getAttributes();
            a.mDecor = decor;
            l.type = WindowManager.LayoutParams.TYPE_BASE_APPLICATION;
            l.softInputMode |= forwardBit;
            if (r.mPreserveWindow) {
                a.mWindowAdded = true;
                r.mPreserveWindow = false;
                // Normally the ViewRoot sets up callbacks with the Activity
                // in addView->ViewRootImpl#setView. If we are instead reusing
                // the decor view we have to notify the view root that the
                // callbacks may have changed.
                ViewRootImpl impl = decor.getViewRootImpl();
                if (impl != null) {
                    impl.notifyChildRebuilt();
                }
            }
            if (a.mVisibleFromClient) {
                if (!a.mWindowAdded) {
                    a.mWindowAdded = true;
                    // Attach the DecorView to the WindowManagerImpl.
                    wm.addView(decor, l);
                } else {
                    // The activity will get a callback for this {@link LayoutParams} change
                    // earlier. However, at that time the decor will not be set (this is set
                    // in this method), so no action will be taken. This call ensures the
                    // callback occurs with the decor set.
                    a.onWindowAttributesChanged(l);
                }
            }
            // If the window has already been added, but during resume
            // we started another activity, then don't yet make the
            // window visible.
        } else if (!willBeVisible) {
            if (localLOGV) Slog.v(TAG, "Launch " + r + " mStartedActivity set");
            r.hideForNow = true;
        }
        // ...
        Looper.myQueue().addIdleHandler(new Idler());
    }

addView creates the ViewRootImpl instance, records the ViewRootImpl and DecorView instances, and finally calls setView to associate the DecorView with the ViewRootImpl; from then on the ViewRootImpl acts as a bridge and interacts with the DecorView indirectly.

/**
 * Provides low-level communication with the system window manager for
 * operations that are not associated with any particular context.
 *
 * This class is only used internally to implement global functions where
 * the caller already knows the display and relevant compatibility information
 * for the operation.  For most purposes, you should use {@link WindowManager} instead
 * since it is bound to a context.
 *
 * @see WindowManagerImpl
 * @hide
 */
public final class WindowManagerGlobal {
	@UnsupportedAppUsage
    private final ArrayList<View> mViews = new ArrayList<View>();
    @UnsupportedAppUsage
    private final ArrayList<ViewRootImpl> mRoots = new ArrayList<ViewRootImpl>();
    @UnsupportedAppUsage
    private final ArrayList<WindowManager.LayoutParams> mParams = new ArrayList<WindowManager.LayoutParams>();
    public void addView(View view, ViewGroup.LayoutParams params,
            Display display, Window parentWindow, int userId) {
        // ...
        final WindowManager.LayoutParams wparams = (WindowManager.LayoutParams) params;
        ViewRootImpl root;
        View panelParentView = null;
        synchronized (mLock) {
            // ...
            IWindowSession windowlessSession = null;
            // ...
			// Create the ViewRootImpl instance.
            if (windowlessSession == null) {
                root = new ViewRootImpl(view.getContext(), display);
            } else {
                root = new ViewRootImpl(view.getContext(), display, windowlessSession);
            }
            view.setLayoutParams(wparams);
			// Keep track of the DecorView and ViewRootImpl instances.
            mViews.add(view);
            mRoots.add(root);
            mParams.add(wparams);
            // do this last because it fires off messages to start doing things
            try {
            	// Call setView to associate the DecorView with the ViewRootImpl; from now on the
            	// ViewRootImpl acts as a bridge and interacts with the DecorView indirectly.
                root.setView(view, wparams, panelParentView, userId);
            } catch (RuntimeException e) {
                // BadTokenException or InvalidDisplayException, clean up.
                if (index >= 0) {
                    removeViewLocked(index, true);
                }
                throw e;
            }
        }
    }
	// ...
}

Finally we arrive at ViewRootImpl's constructor. There the Choreographer instance is created and held as a member of ViewRootImpl, so ViewRootImpl can act as a bridge for two-way communication between the app's UI layer and the Android system. Next, let's look at how Choreographer requests and dispatches VSYNC signals so that the app's UI keeps being refreshed on screen.

public final class ViewRootImpl implements ViewParent,
        View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks,
        AttachedSurfaceControl {
    private static final String TAG = "ViewRootImpl";
	public ViewRootImpl(@UiContext Context context, Display display, IWindowSession session,
            boolean useSfChoreographer) {
        // ...
        // Create the Choreographer instance.
        mChoreographer = useSfChoreographer ? Choreographer.getSfInstance() : Choreographer.getInstance();
        // ...
    }
	// ...
}
/**
 * Coordinates the timing of animations, input and drawing.
 * <p>
 * The choreographer receives timing pulses (such as vertical synchronization)
 * from the display subsystem then schedules work to occur as part of rendering
 * the next display frame.
 * </p><p>
 * Applications typically interact with the choreographer indirectly using
 * higher level abstractions in the animation framework or the view hierarchy.
 * Here are some examples of things you can do using the higher-level APIs.
 * </p>
 * <ul>
 * <li>To post an animation to be processed on a regular time basis synchronized with
 * display frame rendering, use {@link android.animation.ValueAnimator#start}.</li>
 * <li>To post a {@link Runnable} to be invoked once at the beginning of the next display
 * frame, use {@link View#postOnAnimation}.</li>
 * <li>To post a {@link Runnable} to be invoked once at the beginning of the next display
 * frame after a delay, use {@link View#postOnAnimationDelayed}.</li>
 * <li>To post a call to {@link View#invalidate()} to occur once at the beginning of the
 * next display frame, use {@link View#postInvalidateOnAnimation()} or
 * {@link View#postInvalidateOnAnimation(int, int, int, int)}.</li>
 * <li>To ensure that the contents of a {@link View} scroll smoothly and are drawn in
 * sync with display frame rendering, do nothing.  This already happens automatically.
 * {@link View#onDraw} will be called at the appropriate time.</li>
 * </ul>
 * <p>
 * However, there are a few cases where you might want to use the functions of the
 * choreographer directly in your application.  Here are some examples.
 * </p>
 * <ul>
 * <li>If your application does its rendering in a different thread, possibly using GL,
 * or does not use the animation framework or view hierarchy at all
 * and you want to ensure that it is appropriately synchronized with the display, then use
 * {@link Choreographer#postFrameCallback}.</li>
 * <li>... and that's about it.</li>
 * </ul>
 * <p>
 * Each {@link Looper} thread has its own choreographer.  Other threads can
 * post callbacks to run on the choreographer but they will run on the {@link Looper}
 * to which the choreographer belongs.
 * </p>
 */
public final class Choreographer {
	private static final String TAG = "Choreographer";
    // Thread local storage for the choreographer.
    private static final ThreadLocal<Choreographer> sThreadInstance =
            new ThreadLocal<Choreographer>() {
        @Override
        protected Choreographer initialValue() {
            Looper looper = Looper.myLooper();
            if (looper == null) {
                throw new IllegalStateException("The current thread must have a looper!");
            }
            Choreographer choreographer = new Choreographer(looper, VSYNC_SOURCE_APP);
            if (looper == Looper.getMainLooper()) {
                mMainInstance = choreographer;
            }
            return choreographer;
        }
    };
	private Choreographer(Looper looper, int vsyncSource) {
        mLooper = looper;
        mHandler = new FrameHandler(looper);
        mDisplayEventReceiver = USE_VSYNC
                ? new FrameDisplayEventReceiver(looper, vsyncSource)
                : null;
        mLastFrameTimeNanos = Long.MIN_VALUE;
        mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
        mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
        for (int i = 0; i <= CALLBACK_LAST; i++) {
            mCallbackQueues[i] = new CallbackQueue();
        }
        // b/68769804: For low FPS experiments.
        setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
    }
    /**
     * Gets the choreographer for the calling thread.  Must be called from
     * a thread that already has a {@link android.os.Looper} associated with it.
     *
     * @return The choreographer for this thread.
     * @throws IllegalStateException if the thread does not have a looper.
     */
    public static Choreographer getInstance() {
        return sThreadInstance.get();
    }
}

From the analysis above, Choreographer is created after Activity#onResume has run. This design makes sense: the Activity is the container that hosts an app's UI, and only once the container exists does the app need a Choreographer to schedule VSYNC signals and drive frame-by-frame rendering and refreshing.

Does every Activity launch create a new Choreographer instance? No. As the construction code shows, the Choreographer is obtained through a ThreadLocal, so it is a per-thread singleton; the main thread only ever creates one Choreographer instance.

Can any thread create a Choreographer? Only a thread that has a Looper can, because Choreographer relies on the Looper to switch work onto that thread; why this thread switching is needed is analyzed below.
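
As a quick illustration (my own snippet, not part of the framework walkthrough above), the following sketch shows both points: a HandlerThread prepares its own Looper and therefore gets its own per-thread Choreographer, while calling Choreographer.getInstance() on a plain thread without a Looper throws IllegalStateException:

        HandlerThread renderThread = new HandlerThread("render");
        renderThread.start();
        new Handler(renderThread.getLooper()).post(() -> {
            // Runs on a Looper thread, so this succeeds and returns that thread's own instance.
            Choreographer threadChoreographer = Choreographer.getInstance();
            threadChoreographer.postFrameCallback(frameTimeNanos ->
                    Log.d("ChoreographerDemo", "VSYNC on render thread at " + frameTimeNanos));
        });

        new Thread(() -> {
            // No Looper on this thread: getInstance() throws IllegalStateException.
            Choreographer.getInstance();
        }).start();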

VSYNC scheduling and dispatch

Next, let's use the source to see how Choreographer schedules a VSYNC signal, how it receives the signal once it has been scheduled, and what it does when the signal arrives.

First, what does Choreographer's constructor do to support VSYNC scheduling and dispatch? It creates a FrameHandler from the current (main) thread's Looper for thread switching, so that the VSYNC request is issued, and the received VSYNC signal is processed, on the main thread. It creates a FrameDisplayEventReceiver for requesting and receiving VSYNC signals. Finally, it creates an array of CallbackQueue to hold the various types of tasks posted by upper layers.

    private Choreographer(Looper looper, int vsyncSource) {
        mLooper = looper;
        // Responsible for thread switching.
        mHandler = new FrameHandler(looper);
        // Responsible for requesting and receiving VSYNC signals.
        mDisplayEventReceiver = USE_VSYNC
                ? new FrameDisplayEventReceiver(looper, vsyncSource)
                : null;
        mLastFrameTimeNanos = Long.MIN_VALUE;
        mFrameIntervalNanos = (long)(1000000000 / getRefreshRate());
		// Queues that hold the tasks submitted by upper layers, one queue per callback type.
        mCallbackQueues = new CallbackQueue[CALLBACK_LAST + 1];
        for (int i = 0; i <= CALLBACK_LAST; i++) {
            mCallbackQueues[i] = new CallbackQueue();
        }
        // b/68769804: For low FPS experiments.
        setFPSDivisor(SystemProperties.getInt(ThreadedRenderer.DEBUG_FPS_DIVISOR, 1));
    }
private final class CallbackQueue {
	private CallbackRecord mHead;
	// ...
}
// A singly linked list node.
private static final class CallbackRecord {
	public CallbackRecord next;
	public long dueTime;
	/** Runnable or FrameCallback or VsyncCallback object. */
	public Object action;
	/** Denotes the action type. */
	public Object token;
	// ...
}

Now let's look at how FrameDisplayEventReceiver is created. It extends DisplayEventReceiver, and its constructor simply calls the DisplayEventReceiver constructor, so we continue into that.

private final class FrameDisplayEventReceiver extends DisplayEventReceiver implements Runnable {
	private boolean mHavePendingVsync;
	private long mTimestampNanos;
	private int mFrame;
	private VsyncEventData mLastVsyncEventData = new VsyncEventData();
	// Simply calls the DisplayEventReceiver constructor.
	public FrameDisplayEventReceiver(Looper looper, int vsyncSource) {
		super(looper, vsyncSource, 0);
	}
    @Override
    public void onVsync(long timestampNanos, long physicalDisplayId, int frame, VsyncEventData vsyncEventData) {
        try {
            long now = System.nanoTime();
            if (timestampNanos > now) {
                timestampNanos = now;
            }
            if (mHavePendingVsync) {
                Log.w(TAG, "Already have a pending vsync event.  There should only be "
                        + "one at a time.");
            } else {
                mHavePendingVsync = true;
            }
            mTimestampNanos = timestampNanos;
            mFrame = frame;
            mLastVsyncEventData = vsyncEventData;
			// Post an asynchronous message to the main thread.
            Message msg = Message.obtain(mHandler, this);
            msg.setAsynchronous(true);
            mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }
	// Runs on the main thread.
    @Override
    public void run() {
        mHavePendingVsync = false;
        doFrame(mTimestampNanos, mFrame, mLastVsyncEventData);
    }
}

DisplayEventReceiver's constructor obtains the main thread's MessageQueue and then calls the native method nativeInit, passing in the MessageQueue together with a weak reference to itself.

/**
 * Provides a low-level mechanism for an application to receive display events
 * such as vertical sync.
 *
 * The display event receive is NOT thread safe.  Moreover, its methods must only
 * be called on the Looper thread to which it is attached.
 *
 * @hide
 */
public abstract class DisplayEventReceiver {
    /**
     * Creates a display event receiver.
     *
     * @param looper The looper to use when invoking callbacks.
     * @param vsyncSource The source of the vsync tick. Must be one of the VSYNC_SOURCE_* values.
     * @param eventRegistration Which events to dispatch. Must be a bitfield consisting of the
     * EVENT_REGISTRATION_*_FLAG values.
     */
    public DisplayEventReceiver(Looper looper, int vsyncSource, int eventRegistration) {
        if (looper == null) {
            throw new IllegalArgumentException("looper must not be null");
        }
        mMessageQueue = looper.getQueue();
        mReceiverPtr = nativeInit(new WeakReference<DisplayEventReceiver>(this), mMessageQueue,
                vsyncSource, eventRegistration);
    }
	private static native long nativeInit(WeakReference<DisplayEventReceiver> receiver,
            MessageQueue messageQueue, int vsyncSource, int eventRegistration);
}

nativeInit is a native method whose logic is implemented in C++ in android_view_DisplayEventReceiver.cpp. The key parts are the creation of a NativeDisplayEventReceiver and the call to its initialize method.

// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	static jlong nativeInit(JNIEnv* env, jclass clazz, jobject receiverWeak, jobject vsyncEventDataWeak, jobject messageQueueObj, jint vsyncSource, jint eventRegistration, jlong layerHandle) {
		// Get the native MessageQueue object.
    	sp<MessageQueue> messageQueue = android_os_MessageQueue_getMessageQueue(env, messageQueueObj);
	    if (messageQueue == NULL) {
	        jniThrowRuntimeException(env, "MessageQueue is not initialized.");
	        return 0;
	    }
		// Create the native display event receiver, i.e. NativeDisplayEventReceiver.
    	sp<NativeDisplayEventReceiver> receiver = new NativeDisplayEventReceiver(env, receiverWeak, vsyncEventDataWeak, messageQueue, vsyncSource, eventRegistration, layerHandle);
    	// Call initialize() to finish initialization.
    	status_t status = receiver->initialize();
	    if (status) {
	        String8 message;
	        message.appendFormat("Failed to initialize display event receiver.  status=%d", status);
	        jniThrowRuntimeException(env, message.c_str());
	        return 0;
	    }
	    receiver->incStrong(gDisplayEventReceiverClassInfo.clazz); // retain a reference for the object
	    return reinterpret_cast<jlong>(receiver.get());
	}
	// The superclass is DisplayEventDispatcher.
	class NativeDisplayEventReceiver : public DisplayEventDispatcher {
	public:
	    NativeDisplayEventReceiver(JNIEnv* env, jobject receiverWeak, jobject vsyncEventDataWeak, const sp<MessageQueue>& messageQueue, jint vsyncSource, jint eventRegistration, jlong layerHandle);
	    void dispose();
	protected:
	    virtual ~NativeDisplayEventReceiver();
	private:
	    jobject mReceiverWeakGlobal;
	    jobject mVsyncEventDataWeakGlobal;
	    sp<MessageQueue> mMessageQueue;
	    void dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId, uint32_t count, VsyncEventData vsyncEventData) override;
	    void dispatchHotplug(nsecs_t timestamp, PhysicalDisplayId displayId, bool connected) override;
	    void dispatchHotplugConnectionError(nsecs_t timestamp, int errorCode) override;
	    void dispatchModeChanged(nsecs_t timestamp, PhysicalDisplayId displayId, int32_t modeId,
	                             nsecs_t renderPeriod) override;
	    void dispatchFrameRateOverrides(nsecs_t timestamp, PhysicalDisplayId displayId,
	                                    std::vector<FrameRateOverride> overrides) override;
	    void dispatchNullEvent(nsecs_t timestamp, PhysicalDisplayId displayId) override {}
	    void dispatchHdcpLevelsChanged(PhysicalDisplayId displayId, int connectedLevel,
	                                   int maxLevel) override;
	};
	NativeDisplayEventReceiver::NativeDisplayEventReceiver(JNIEnv* env, jobject receiverWeak,
	                                                       jobject vsyncEventDataWeak,
	                                                       const sp<MessageQueue>& messageQueue,
	                                                       jint vsyncSource, jint eventRegistration,
	                                                       jlong layerHandle)
	                                                       // Superclass constructor.
	                                                       : DisplayEventDispatcher(
	                                                       messageQueue->getLooper(),
	                                                       static_cast<gui::ISurfaceComposer::VsyncSource>(vsyncSource),
	                                                       static_cast<gui::ISurfaceComposer::EventRegistration>(eventRegistration),
	                                                       layerHandle != 0 ? sp<IBinder>::fromExisting(reinterpret_cast<IBinder*>(layerHandle)) : nullptr
	                                                       ),
	                                                       // The Java-layer receiver.
	                                                       mReceiverWeakGlobal(env->NewGlobalRef(receiverWeak)),
	                                                       mVsyncEventDataWeakGlobal(env->NewGlobalRef(vsyncEventDataWeak)),
	                                                       mMessageQueue(messageQueue) {
	    ALOGV("receiver %p ~ Initializing display event receiver.", this);
	}

First, consider how the NativeDisplayEventReceiver is constructed. Its superclass is DisplayEventDispatcher, which the source shows is responsible for dispatching events such as VSYNC and hotplug; internally, DisplayEventDispatcher creates a (native) DisplayEventReceiver that receives the events sent over from the SurfaceFlinger process.

// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	DisplayEventDispatcher::DisplayEventDispatcher(const sp<Looper>& looper,
	                                               gui::ISurfaceComposer::VsyncSource vsyncSource,
	                                               EventRegistrationFlags eventRegistration,
	                                               const sp<IBinder>& layerHandle)
	      : mLooper(looper),
	        mReceiver(vsyncSource, eventRegistration, layerHandle), // The native DisplayEventReceiver.
	        mWaitingForVsync(false),
	        mLastVsyncCount(0),
	        mLastScheduleVsyncTime(0) {
	    ALOGV("dispatcher %p ~ Initializing display event dispatcher.", this);
	}
	// frameworks/native/libs/gui/DisplayEventReceiver.cpp
	DisplayEventReceiver::DisplayEventReceiver(gui::ISurfaceComposer::VsyncSource vsyncSource, EventRegistrationFlags eventRegistration, const sp<IBinder>& layerHandle) {
		// Get the proxy object for SurfaceFlinger.
	    sp<gui::ISurfaceComposer> sf(ComposerServiceAIDL::getComposerService());
	    if (sf != nullptr) {
	        mEventConnection = nullptr;
	        // Create a connection to the requested VSYNC source of the EventThread-app thread in the SurfaceFlinger process.
	        binder::Status status = sf->createDisplayEventConnection(vsyncSource, static_cast<gui::ISurfaceComposer::EventRegistration>(eventRegistration.get()), layerHandle, &mEventConnection);
	        if (status.isOk() && mEventConnection != nullptr) {
	        	// Once the connection is created, construct a BitTube and copy over the read end of
	        	// the connection, which is later used to listen for VSYNC signals.
	            mDataChannel = std::make_unique<gui::BitTube>();
	            // Copy the read-side socket descriptor created in the SurfaceFlinger process.
	            status = mEventConnection->stealReceiveChannel(mDataChannel.get());
	            if (!status.isOk()) {
	                ALOGE("stealReceiveChannel failed: %s", status.toString8().c_str());
	                mInitError = std::make_optional<status_t>(status.transactionError());
	                mDataChannel.reset();
	                mEventConnection.clear();
	            }
	        } else {
	            ALOGE("DisplayEventConnection creation failed: status=%s", status.toString8().c_str());
	        }
	    }
	}
	// frameworks/native/services/surfaceflinger/Scheduler/EventThread.cpp
	sp<EventThreadConnection> EventThread::createEventConnection(EventRegistrationFlags eventRegistration) const {
	    auto connection = sp<EventThreadConnection>::make(const_cast<EventThread*>(this), IPCThreadState::self()->getCallingUid(), eventRegistration);
	    if (FlagManager::getInstance().misc1()) {
	        const int policy = SCHED_FIFO;
	        connection->setMinSchedulerPolicy(policy, sched_get_priority_min(policy));
	    }
	    return connection;
	}
	// Creates the BitTube, which internally creates a socket pair.
	EventThreadConnection::EventThreadConnection(EventThread* eventThread, uid_t callingUid, EventRegistrationFlags eventRegistration)
      : mOwnerUid(callingUid),
        mEventRegistration(eventRegistration),
        mEventThread(eventThread),
        mChannel(gui::BitTube::DefaultSize) {}

As the source shows, the BitTube is what receives the signals sent by the SurfaceFlinger process, and the BitTube implementation reveals that it delivers signals across processes over a socket pair.

// frameworks/native/libs/gui/BitTube.cpp
	static const size_t DEFAULT_SOCKET_BUFFER_SIZE = 4 * 1024;
	BitTube::BitTube(size_t bufsize) {
	    init(bufsize, bufsize);
	}
	void BitTube::init(size_t rcvbuf, size_t sndbuf) {
	    int sockets[2];
	    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
	        size_t size = DEFAULT_SOCKET_BUFFER_SIZE;
	        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
	        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
	        // since we don't use the "return channel", we keep it small...
	        setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &size, sizeof(size));
	        setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &size, sizeof(size));
	        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
	        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
	        mReceiveFd.reset(sockets[0]);
	        mSendFd.reset(sockets[1]);
	    } else {
	        mReceiveFd.reset();
	        ALOGE("BitTube: pipe creation failed (%s)", strerror(errno));
	    }
	}
	base::unique_fd BitTube::moveReceiveFd() {
	    return std::move(mReceiveFd);
	}

To summarize the Choreographer creation flow: system_server schedules the launch/resume transaction into the app process; after Activity#onResume, WindowManagerGlobal#addView creates a ViewRootImpl, whose constructor obtains the calling thread's Choreographer; the Choreographer constructor creates a FrameDisplayEventReceiver, whose nativeInit call creates a NativeDisplayEventReceiver (a DisplayEventDispatcher) that connects to SurfaceFlinger's EventThread and keeps the read end of a BitTube socket pair for receiving VSYNC events.

Requesting a VSYNC signal

For a 60 Hz display, a VSYNC signal is generated roughly every 16.67 ms, but an app process does not necessarily receive every one of them; it only listens for and receives VSYNC when the upper layers actually need it. The benefit of this design is that the app's rendering pipeline is triggered on demand, avoiding the power cost of unnecessary rendering.

Upper layers usually start a draw by calling invalidate or requestLayout; the request is eventually turned into a CallbackRecord and placed into the CallbackQueue of the corresponding type, as sketched below, before the Choreographer code that stores and schedules the callback.
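
For reference, ViewRootImpl#scheduleTraversals is where a requestLayout/invalidate ends up posting such a callback. The following is a simplified sketch of that method (trimmed from AOSP; details vary slightly across versions): it posts a sync barrier so the upcoming asynchronous frame message can overtake ordinary main-thread messages, then posts a CALLBACK_TRAVERSAL callback whose runnable later calls performTraversals():

    // ViewRootImpl#scheduleTraversals (simplified sketch)
    void scheduleTraversals() {
        if (!mTraversalScheduled) {
            mTraversalScheduled = true;
            // Sync barrier: lets the Choreographer's asynchronous messages jump the queue.
            mTraversalBarrier = mHandler.getLooper().getQueue().postSyncBarrier();
            // Turned into a CallbackRecord in the CALLBACK_TRAVERSAL queue;
            // mTraversalRunnable eventually calls performTraversals().
            mChoreographer.postCallback(
                    Choreographer.CALLBACK_TRAVERSAL, mTraversalRunnable, null);
            // ...
        }
    }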

public final class Choreographer {
	// ...
    private void postCallbackDelayedInternal(int callbackType,
            Object action, Object token, long delayMillis) {
        synchronized (mLock) {
            final long now = SystemClock.uptimeMillis();
            // Compute the timestamp at which the task is due.
            final long dueTime = now + delayMillis;
            mCallbackQueues[callbackType].addCallbackLocked(dueTime, action, token);
            if (dueTime <= now) { // If no delay is needed, request a VSYNC signal right away,
                scheduleFrameLocked(now);
            } else { // otherwise request it later via a delayed message.
                Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_CALLBACK, action);
                msg.arg1 = callbackType;
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, dueTime);
            }
        }
    }
	private void scheduleFrameLocked(long now) {
        if (!mFrameScheduled) {
            mFrameScheduled = true;
            if (USE_VSYNC) {
                // If we are already on the Looper thread, request the VSYNC signal directly;
                // otherwise post an asynchronous message so the request is issued on that thread.
                if (isRunningOnLooperThreadLocked()) {
                    scheduleVsyncLocked();
                } else {
                    Message msg = mHandler.obtainMessage(MSG_DO_SCHEDULE_VSYNC);
                    msg.setAsynchronous(true);
                    mHandler.sendMessageAtFrontOfQueue(msg);
                }
            } else {
                final long nextFrameTime = Math.max(
                        mLastFrameTimeNanos / TimeUtils.NANOS_PER_MS + sFrameDelay, now);
                if (DEBUG_FRAMES) {
                    Log.d(TAG, "Scheduling next frame in " + (nextFrameTime - now) + " ms.");
                }
                Message msg = mHandler.obtainMessage(MSG_DO_FRAME);
                msg.setAsynchronous(true);
                mHandler.sendMessageAtTime(msg, nextFrameTime);
            }
        }
    }
	@UnsupportedAppUsage(maxTargetSdk = Build.VERSION_CODES.R, trackingBug = 170729553)
    private void scheduleVsyncLocked() {
        try {
            Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#scheduleVsyncLocked");
            // Request a VSYNC signal through FrameDisplayEventReceiver#scheduleVsync.
            mDisplayEventReceiver.scheduleVsync();
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }
	// ...
}

Ultimately a native method is invoked to talk to the SurfaceFlinger process and ask it to deliver a VSYNC signal to the app process. Recall that while creating the Choreographer, a DisplayEventReceiver was created and its JNI call created a NativeDisplayEventReceiver in the native layer; the pointer to that object was returned to Java and stored in the DisplayEventReceiver. The Java layer now calls back into native code with that pointer, so the native layer can locate the previously created NativeDisplayEventReceiver, call its scheduleVsync method, and finally make the cross-process request through mEventConnection.

// android.view.DisplayEventReceiver#scheduleVsync
    @UnsupportedAppUsage
    public void scheduleVsync() {
        if (mReceiverPtr == 0) {
            Log.w(TAG, "Attempted to schedule a vertical sync pulse but the display event "
                    + "receiver has already been disposed.");
        } else {
            nativeScheduleVsync(mReceiverPtr);
        }
    }
	// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	static void nativeScheduleVsync(JNIEnv* env, jclass clazz, jlong receiverPtr) {
	    sp<NativeDisplayEventReceiver> receiver =
	            reinterpret_cast<NativeDisplayEventReceiver*>(receiverPtr);
	    status_t status = receiver->scheduleVsync();
	    if (status) {
	        String8 message;
	        message.appendFormat("Failed to schedule next vertical sync pulse.  status=%d", status);
	        jniThrowRuntimeException(env, message.c_str());
	    }
	}
	// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	status_t DisplayEventDispatcher::scheduleVsync() {
	    if (!mWaitingForVsync) {
	        ALOGV("dispatcher %p ~ Scheduling vsync.", this);
	        // Drain all pending events.
	        nsecs_t vsyncTimestamp;
	        PhysicalDisplayId vsyncDisplayId;
	        uint32_t vsyncCount;
	        VsyncEventData vsyncEventData;
	        if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
	            ALOGE("dispatcher %p ~ last event processed while scheduling was for %" PRId64 "", this,
	                  ns2ms(static_cast<nsecs_t>(vsyncTimestamp)));
	        }
			// Request the next VSYNC signal.
	        status_t status = mReceiver.requestNextVsync();
	        if (status) {
	            ALOGW("Failed to request next vsync, status=%d", status);
	            return status;
	        }
	        mWaitingForVsync = true;
	        mLastScheduleVsyncTime = systemTime(SYSTEM_TIME_MONOTONIC);
	    }
	    return OK;
	}
	// frameworks/native/libs/gui/DisplayEventReceiver.cpp
	status_t DisplayEventReceiver::requestNextVsync() {
	    if (mEventConnection != nullptr) {
	        mEventConnection->requestNextVsync();
	        return NO_ERROR;
	    }
	    return mInitError.has_value() ? mInitError.value() : NO_INIT;
	}

Dispatching the VSYNC signal

After SurfaceFlinger notifies the app process over the socket that a VSYNC signal has arrived, the app process's handleEvent method is invoked, and the call eventually goes through JNI up to the Java-level DisplayEventReceiver#dispatchVsync, which forwards to FrameDisplayEventReceiver#onVsync.

// frameworks/native/libs/gui/DisplayEventDispatcher.cpp
	int DisplayEventDispatcher::handleEvent(int, int events, void*) {
	    if (events & (Looper::EVENT_ERROR | Looper::EVENT_HANGUP)) {
	        return 0; // remove the callback
	    }
	    if (!(events & Looper::EVENT_INPUT)) {
	        return 1; // keep the callback
	    }
	    // Drain all pending events, keep the last vsync.
	    nsecs_t vsyncTimestamp;
	    PhysicalDisplayId vsyncDisplayId;
	    uint32_t vsyncCount;
	    VsyncEventData vsyncEventData;
	    if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
	        mWaitingForVsync = false;
	        mLastVsyncCount = vsyncCount;
	        dispatchVsync(vsyncTimestamp, vsyncDisplayId, vsyncCount, vsyncEventData);
	    }
	    if (mWaitingForVsync) {
	        const nsecs_t currentTime = systemTime(SYSTEM_TIME_MONOTONIC);
	        const nsecs_t vsyncScheduleDelay = currentTime - mLastScheduleVsyncTime;
	        if (vsyncScheduleDelay > WAITING_FOR_VSYNC_TIMEOUT) {
	            mWaitingForVsync = false;
	            dispatchVsync(currentTime, vsyncDisplayId /* displayId is not used */,
	                          ++mLastVsyncCount, vsyncEventData /* empty data */);
	        }
	    }
	    return 1; // keep the callback
	}
	// frameworks/base/core/jni/android_view_DisplayEventReceiver.cpp
	void NativeDisplayEventReceiver::dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId, uint32_t count, VsyncEventData vsyncEventData) {
	    JNIEnv* env = AndroidRuntime::getJNIEnv();
	    ScopedLocalRef<jobject> receiverObj(env, GetReferent(env, mReceiverWeakGlobal));
	    ScopedLocalRef<jobject> vsyncEventDataObj(env, GetReferent(env, mVsyncEventDataWeakGlobal));
	    if (receiverObj.get() && vsyncEventDataObj.get()) {
	        env->SetIntField(vsyncEventDataObj.get(), gDisplayEventReceiverClassInfo.vsyncEventDataClassInfo.preferredFrameTimelineIndex, vsyncEventData.preferredFrameTimelineIndex);
	        env->SetIntField(vsyncEventDataObj.get(), gDisplayEventReceiverClassInfo.vsyncEventDataClassInfo.frameTimelinesLength, vsyncEventData.frameTimelinesLength);
	        env->SetLongField(vsyncEventDataObj.get(), gDisplayEventReceiverClassInfo.vsyncEventDataClassInfo.frameInterval, vsyncEventData.frameInterval);
	        ScopedLocalRef<jobjectArray> frameTimelinesObj(env, reinterpret_cast<jobjectArray>(env->GetObjectField(vsyncEventDataObj.get(), gDisplayEventReceiverClassInfo.vsyncEventDataClassInfo.frameTimelines)));
	        for (size_t i = 0; i < vsyncEventData.frameTimelinesLength; i++) {
	            VsyncEventData::FrameTimeline& frameTimeline = vsyncEventData.frameTimelines[i];
	            ScopedLocalRef<jobject>
	                    frameTimelineObj(env, env->GetObjectArrayElement(frameTimelinesObj.get(), i));
	            env->SetLongField(frameTimelineObj.get(),
	                              gDisplayEventReceiverClassInfo.frameTimelineClassInfo.vsyncId,
	                              frameTimeline.vsyncId);
	            env->SetLongField(frameTimelineObj.get(),
	                              gDisplayEventReceiverClassInfo.frameTimelineClassInfo
	                                      .expectedPresentationTime,
	                              frameTimeline.expectedPresentationTime);
	            env->SetLongField(frameTimelineObj.get(),
	                              gDisplayEventReceiverClassInfo.frameTimelineClassInfo.deadline,
	                              frameTimeline.deadlineTimestamp);
	        }
			// Finally call into the Java-level dispatchVsync.
	        env->CallVoidMethod(receiverObj.get(), gDisplayEventReceiverClassInfo.dispatchVsync, timestamp, displayId.value, count);
	        ALOGV("receiver %p ~ Returned from vsync handler.", this);
	    }
	    mMessageQueue->raiseAndClearException(env, "dispatchVsync");
	}
// android.view.DisplayEventReceiver
	// Called from native code.
    @SuppressWarnings("unused")
    private void dispatchVsync(long timestampNanos, long physicalDisplayId, int frame,
            VsyncEventData vsyncEventData) {
        onVsync(timestampNanos, physicalDisplayId, frame, vsyncEventData);
    }
	// android.view.Choreographer.FrameDisplayEventReceiver
	@Override
    public void onVsync(long timestampNanos, long physicalDisplayId, int frame, VsyncEventData vsyncEventData) {
    	try {
            long now = System.nanoTime();
            if (timestampNanos > now) {
                timestampNanos = now;
            }
            if (mHavePendingVsync) {
                Log.w(TAG, "Already have a pending vsync event.  There should only be "
                        + "one at a time.");
            } else {
                mHavePendingVsync = true;
            }
            mTimestampNanos = timestampNanos;
            mFrame = frame;
            mLastVsyncEventData = vsyncEventData;
            Message msg = Message.obtain(mHandler, this);
            msg.setAsynchronous(true); // An asynchronous message; with the sync barrier posted earlier it is handled ahead of ordinary messages.
            mHandler.sendMessageAtTime(msg, timestampNanos / TimeUtils.NANOS_PER_MS);
        } finally {
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
	}
        @Override
        public void run() {
            mHavePendingVsync = false;
            doFrame(mTimestampNanos, mFrame, mLastVsyncEventData);
        }

This finally lands in Choreographer#doFrame, which begins processing a new frame and preparing its data.

Processing the VSYNC signal

Once the app process receives the VSYNC signal, it calls doFrame to start preparing the new frame's data. doFrame also measures the jank time, i.e. how long after the VSYNC signal arrived the main thread actually got to handle it; if that wait is too long, the frame's data cannot be prepared within one frame interval and the result no longer looks smooth to the user.

// android.view.Choreographer
    void doFrame(long frameTimeNanos, int frame, DisplayEventReceiver.VsyncEventData vsyncEventData) {
        final long startNanos;
        final long frameIntervalNanos = vsyncEventData.frameInterval;
        try {
            FrameData frameData = new FrameData(frameTimeNanos, vsyncEventData);
            synchronized (mLock) {
                if (!mFrameScheduled) {
                    traceMessage("Frame not scheduled");
                    return; // no work to do
                }
                long intendedFrameTimeNanos = frameTimeNanos;
                startNanos = System.nanoTime();
                // frameTimeNanos is the timestamp handed over by SurfaceFlinger; it may have been
                // clamped to the time at which the app process received the VSYNC signal.
                // jitterNanos includes the time the Handler spent before reaching this message,
                // i.e. the time the main thread spent on other messages; if this is too long the frame janks.
                final long jitterNanos = startNanos - frameTimeNanos;
                if (jitterNanos >= frameIntervalNanos) {
                    long lastFrameOffset = 0;
                    if (frameIntervalNanos == 0) {
                        Log.i(TAG, "Vsync data empty due to timeout");
                    } else {
                        lastFrameOffset = jitterNanos % frameIntervalNanos;
                        final long skippedFrames = jitterNanos / frameIntervalNanos;
                        if (skippedFrames >= SKIPPED_FRAME_WARNING_LIMIT) {
                            Log.i(TAG, "Skipped " + skippedFrames + " frames!  "
                                    + "The application may be doing too much work on its main "
                                    + "thread.");
                        }
                        if (DEBUG_JANK) {
                            Log.d(TAG, "Missed vsync by " + (jitterNanos * 0.000001f) + " ms "
                                    + "which is more than the frame interval of "
                                    + (frameIntervalNanos * 0.000001f) + " ms!  "
                                    + "Skipping " + skippedFrames + " frames and setting frame "
                                    + "time to " + (lastFrameOffset * 0.000001f)
                                    + " ms in the past.");
                        }
                    }
                    frameTimeNanos = startNanos - lastFrameOffset;
                    frameData.updateFrameData(frameTimeNanos);
                }
                if (frameTimeNanos < mLastFrameTimeNanos) {
                    if (DEBUG_JANK) {
                        Log.d(TAG, "Frame time appears to be going backwards.  May be due to a "
                                + "previously skipped frame.  Waiting for next vsync.");
                    }
                    traceMessage("Frame time goes backward");
                    scheduleVsyncLocked();
                    return;
                }
                if (mFPSDivisor > 1) {
                    long timeSinceVsync = frameTimeNanos - mLastFrameTimeNanos;
                    if (timeSinceVsync < (frameIntervalNanos * mFPSDivisor) && timeSinceVsync > 0) {
                        traceMessage("Frame skipped due to FPSDivisor");
                        scheduleVsyncLocked();
                        return;
                    }
                }
                mFrameInfo.setVsync(intendedFrameTimeNanos, frameTimeNanos,
                        vsyncEventData.preferredFrameTimeline().vsyncId,
                        vsyncEventData.preferredFrameTimeline().deadline, startNanos,
                        vsyncEventData.frameInterval);
                mFrameScheduled = false;
                mLastFrameTimeNanos = frameTimeNanos;
                mLastFrameIntervalNanos = frameIntervalNanos;
                mLastVsyncEventData = vsyncEventData;
            }
			// Run the callbacks of each type, in order.
            AnimationUtils.lockAnimationClock(frameTimeNanos / TimeUtils.NANOS_PER_MS);
            mFrameInfo.markInputHandlingStart();
            doCallbacks(Choreographer.CALLBACK_INPUT, frameData, frameIntervalNanos);
            mFrameInfo.markAnimationsStart();
            doCallbacks(Choreographer.CALLBACK_ANIMATION, frameData, frameIntervalNanos);
            doCallbacks(Choreographer.CALLBACK_INSETS_ANIMATION, frameData,
                    frameIntervalNanos);
            mFrameInfo.markPerformTraversalsStart();
            doCallbacks(Choreographer.CALLBACK_TRAVERSAL, frameData, frameIntervalNanos);
            doCallbacks(Choreographer.CALLBACK_COMMIT, frameData, frameIntervalNanos);
        } finally {
            AnimationUtils.unlockAnimationClock();
            Trace.traceEnd(Trace.TRACE_TAG_VIEW);
        }
    }
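
To make the skipped-frame arithmetic in doFrame concrete, here is a small worked example with hypothetical numbers (not taken from the source):

            // Hypothetical numbers on a 60 Hz display: the main thread was busy for ~40 ms
            // between the VSYNC timestamp and the moment doFrame() actually ran.
            long frameIntervalNanos = 16_666_667L;                       // ~16.67 ms per frame
            long jitterNanos        = 40_000_000L;                       // startNanos - frameTimeNanos
            long skippedFrames      = jitterNanos / frameIntervalNanos;  // = 2 whole frame intervals missed
            long lastFrameOffset    = jitterNanos % frameIntervalNanos;  // ~6.67 ms into the current interval
            // doFrame() then realigns the frame time as frameTimeNanos = startNanos - lastFrameOffset,
            // so the callbacks run against a time that sits on the most recent VSYNC boundary.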

Next, the CallbackRecords submitted earlier by upper layers are executed; they fall into the following types:

  • CALLBACK_INPUT: input callbacks, e.g. touch events; executed first;
  • CALLBACK_ANIMATION: animation callbacks, e.g. property animations;
  • CALLBACK_INSETS_ANIMATION: window-inset animations (e.g. IME and system-bar insets);
  • CALLBACK_TRAVERSAL: layout and draw, i.e. the View measure, layout, and draw pass;
  • CALLBACK_COMMIT: commit callbacks, run after the traversal and draw work has completed.

After all CallbackRecords have been executed in order, the app process's drawing work for this frame is done; the rendered data is submitted into a GraphicBuffer, an SF-type VSYNC is scheduled, and SurfaceFlinger finally composites the data and sends it to the display.
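
As a closing usage note (an illustrative snippet of my own, not framework code), Choreographer.FrameCallback is the public entry point into the CALLBACK_ANIMATION queue, and re-posting it each frame is a common way to observe the VSYNC cadence, for example in a simple frame-interval monitor:

    public final class FrameTimeLogger implements Choreographer.FrameCallback {
        private long mLastFrameTimeNanos;

        public void start() {
            // Must be called on a Looper thread; the callback runs on that thread's Choreographer.
            Choreographer.getInstance().postFrameCallback(this);
        }

        @Override
        public void doFrame(long frameTimeNanos) {
            if (mLastFrameTimeNanos != 0) {
                long deltaMillis = (frameTimeNanos - mLastFrameTimeNanos) / 1_000_000L;
                Log.d("FrameTimeLogger", "frame interval: " + deltaMillis + " ms");
            }
            mLastFrameTimeNanos = frameTimeNanos;
            // Each postFrameCallback schedules only one frame; re-post to keep observing.
            Choreographer.getInstance().postFrameCallback(this);
        }
    }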

To summarize the VSYNC request, dispatch, and processing flow: an upper-layer draw request is stored in a CallbackQueue and scheduleFrameLocked asks the FrameDisplayEventReceiver to schedule a VSYNC; the native DisplayEventDispatcher forwards the request to SurfaceFlinger over the event connection; when the signal arrives over the BitTube socket, handleEvent dispatches it through JNI to onVsync, which posts an asynchronous message to the main thread; the main thread then runs doFrame, which executes the input, animation, insets-animation, traversal, and commit callbacks in order.

Summary

The Choreographer mechanism coordinates the app process and the SurfaceFlinger process: it accepts the app's UI refresh requests, schedules VSYNC signals in a unified way, and aligns UI rendering work with the VSYNC timeline. Acting as a relay, it dispatches VSYNC signals and services the upper layers' refresh requests, preparing each frame's data at the VSYNC cadence so that SurfaceFlinger can composite it and put it on screen.

This concludes this look at how Choreographer works in Android.
