Sunday, November 14, 2010

DRM integration in Stagefright

Google recently pushed a change into AOSP that integrates DRM support into Stagefright:
http://android.git.kernel.org/?p=platform/frameworks/base.git;a=commitdiff;h=d5770917a50a828cb4337c2a392b3e4a375624b9#patch12

Before we delve into the change, note that there are basically two types of DRM schemes:
1. All of the data is stored under a uniform encryption layer (currently defined as DecryptApiType::CONTAINER_BASED in the DRM framework);
2. Encrypted data is embedded within a plain-text container format, so it can be decrypted packet by packet, which also makes it applicable to progressive download and streaming.

The commit is mainly composed of the following parts:
1) Extension of the DataSource interface:
+    // for DRM
+    virtual DecryptHandle* DrmInitialization(DrmManagerClient *client) {
+        return NULL;
+    }
+    virtual void getDrmInfo(DecryptHandle **handle, DrmManagerClient **client) {};

FileSource implements these APIs and communicates with the DRM service to open a decryption session. For container-based protection (e.g. OMA DRM v1), FileSource intercepts the readAt() function and transparently returns decrypted data to its client -- the file parser, which therefore remains ignorant of the underlying protection mechanism.

2) Extension of the MediaExtractor interface:
+    // for DRM
+    virtual void setDrmFlag(bool flag) {};
+    virtual char* getDrmTrackInfo(size_t trackID, int *len) {
+        return NULL;
+    }
The above APIs are used to retrieve context data, such as the Protection Scheme Information Box ("sinf") in mp4/3gp files, to properly initialize the corresponding DRM plugin.

3) DRMExtractor and the DRM format sniffer:
SniffDRM is registered to identify DRM-protected files and their original container format. As mentioned, for CONTAINER_BASED encryption, FileSource handles data decryption transparently for the parsers. For the other scheme, which encrypts each sample/NAL unit individually, a DRMExtractor is created to wrap the original MediaExtractor and decrypt the data after each sample/NAL is read from it. In this way, DRM-related logic is kept separate from the actual file parsers.
However, since DRM is usually an extension built on top of the underlying container format, it may not be as easy to decouple from the file parser for other protection schemes. For example, Microsoft's PIFF extension to the ISO base media file format requires an IV for each sample, plus the details of sub-sample encryption where applicable. Besides, this design duplicates logic in the DRM service, which has to recognize the original container format on its own for non-CONTAINER_BASED encryption.
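The wrapping idea can be sketched as follows (hypothetical names and interfaces, not the actual DRMExtractor code): a wrapper source pulls each encrypted sample from the inner source and decrypts it in place before handing it downstream:

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <vector>

// Minimal stand-in for a track source: read() yields one sample at a time.
struct Sample { std::vector<uint8_t> data; };

class MediaSourceLike {
public:
    virtual ~MediaSourceLike() = default;
    virtual bool read(Sample *out) = 0;
};

// Sketch of the DRMExtractor idea: wrap the original source and decrypt
// each sample after reading it, so the real parser stays DRM-free.
class DrmSourceWrapper : public MediaSourceLike {
public:
    DrmSourceWrapper(std::unique_ptr<MediaSourceLike> inner,
                     std::function<void(std::vector<uint8_t>&)> decrypt)
        : mInner(std::move(inner)), mDecrypt(std::move(decrypt)) {}

    bool read(Sample *out) override {
        if (!mInner->read(out)) return false;   // pull encrypted sample
        mDecrypt(out->data);                    // decrypt in place
        return true;
    }
private:
    std::unique_ptr<MediaSourceLike> mInner;
    std::function<void(std::vector<uint8_t>&)> mDecrypt;
};

// A toy inner source for demonstration: serves pre-canned samples.
class VectorSource : public MediaSourceLike {
public:
    explicit VectorSource(std::vector<Sample> s) : mSamples(std::move(s)) {}
    bool read(Sample *out) override {
        if (mNext >= mSamples.size()) return false;
        *out = mSamples[mNext++];
        return true;
    }
private:
    std::vector<Sample> mSamples;
    size_t mNext = 0;
};
```

The downstream decoder sees only plaintext samples, just as it would from an ordinary extractor.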

4) Misc:
- changes in AwesomePlayer for rights management;
- changes in MPEG4Extractor to retrieve "sinf";
- etc.

Thursday, September 30, 2010

RTSP in Stagefright (2)

A basic sequence diagram of setting up an RTSP connection in Stagefright:

Tuesday, September 28, 2010

RTSP in Stagefright (1)

RTSP support was added to Stagefright in the Gingerbread release:
  • A preliminary foundation module provides a generic way to handle commands and events/messages asynchronously yet sequentially on a scheduler thread. Each message carries a target id indicating its corresponding handler, which is registered with a looper. A looper spawns a thread and schedules the registered handlers to process any messages posted to it.
  • ARTSPController acts as the MediaExtractor for RTSP playback; it delegates RTSP protocol handling to ARTSPConnection, and payload parsing to ARTPConnection/ARTPSource. ARTPSource leverages ARTPAssembler to re-assemble RTP packets into frames; AAMRAssembler, AH263Assembler, etc. all inherit from ARTPAssembler, one per payload format. Parsed buffers are sent back via AMessage and queued in APacketSource, which acts as the MediaSource for downstream components.
  • There are also ARTPSession and ARTPWriter, which re-use the existing RTP code for a VOIP (GTalk?) solution.
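The looper/handler pattern described in the first bullet can be sketched like this (hypothetical names and a deliberately minimal API, not the real ALooper/AHandler classes): handlers register under an id, messages carry a target id, and a single looper thread dispatches them in posting order:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <map>
#include <mutex>
#include <thread>

// A message carries the id of its target handler plus a payload code.
struct Message { int targetId; int what; };

// Minimal looper sketch: one worker thread drains a queue and invokes
// the handler registered under each message's target id.
class MiniLooper {
public:
    MiniLooper() : mThread([this] { loop(); }) {}
    ~MiniLooper() {
        { std::lock_guard<std::mutex> l(mLock); mDone = true; }
        mCond.notify_all();
        mThread.join();   // drains remaining messages, then exits
    }
    void registerHandler(int id, std::function<void(const Message&)> h) {
        std::lock_guard<std::mutex> l(mLock);
        mHandlers[id] = std::move(h);
    }
    void post(const Message &msg) {
        { std::lock_guard<std::mutex> l(mLock); mQueue.push_back(msg); }
        mCond.notify_all();
    }
private:
    void loop() {
        for (;;) {
            Message msg;
            std::function<void(const Message&)> h;
            {
                std::unique_lock<std::mutex> l(mLock);
                mCond.wait(l, [this] { return mDone || !mQueue.empty(); });
                if (mQueue.empty()) return;   // done and fully drained
                msg = mQueue.front();
                mQueue.pop_front();
                auto it = mHandlers.find(msg.targetId);
                if (it != mHandlers.end()) h = it->second;
            }
            if (h) h(msg);   // run handler outside the lock, one at a time
        }
    }
    std::mutex mLock;
    std::condition_variable mCond;
    std::deque<Message> mQueue;
    std::map<int, std::function<void(const Message&)>> mHandlers;
    bool mDone = false;
    std::thread mThread;   // declared last: starts after the members above
};
```

Because a single thread dispatches everything, handlers never need their own locking against each other, which is the main appeal of the design.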




Sunday, July 4, 2010

Native mediatest updated for FroYo

I've had to update this small program for almost every major release, so I've uploaded it to github for better version control:
git clone http://github.com/freepine/MediaTest.git

No wonder Android engineers keep warning us against relying on unpublished code :)

Wednesday, June 2, 2010

Media framework change in Froyo

Here is a Google I/O session by Dave Sparks, who talks about the new media framework APIs introduced in Froyo, and also sheds some light on the future of Stagefright, OpenCORE, etc. in the Q&A section.

Wednesday, February 3, 2010

Analyze memory leak of Android native process

Android's libc_debug.so has a built-in function to dump all heap allocations with their backtraces, which is very useful for debugging memory leaks in native processes. Below are the steps I summarized during an investigation of the mediaserver process:
  1. apply the patch in ./frameworks/base, which registers a memory dumper service in mediaserver process, then rebuild
  2. (*)flash new system.img, replace libc.so with libc_debug.so, then reboot
    • $ adb remount
    • $ adb shell mv /system/lib/libc_debug.so /system/lib/libc.so
    • $ adb reboot
  3. run memorydumper to get the initial heap allocations of mediaserver process
    • $ adb shell /system/bin/memorydumper
  4. play several files, save the process maps during playback, then get the memory dump again
    • $ adb pull /proc/<mediaserver_pid>/maps .
    • $ adb shell /system/bin/memorydumper
  5. get the diff file of memory allocations
    • $ adb pull /data/memstatus_1136.0 .
    • $ adb pull /data/memstatus_1136.1 .
    • $ diff memstatus_1136.0 memstatus_1136.1 >diff_0_1
  6. run the script to resolve symbols from the backtrace addresses in the diff file
    • $ ./addr2func.py --root-dir=../ --maps-file=./maps diff_0_1
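The snapshot-and-diff idea behind the steps above can be sketched in plain C++ (a toy illustration of what libc_debug.so tracks, not the real bionic hooks): keep a registry of live allocations, snapshot it before and after the workload, and report what appeared in between:

```cpp
#include <cstdlib>
#include <map>
#include <set>

// Toy registry of live allocations, analogous to the per-allocation
// records libc_debug.so keeps (minus the backtraces).
static std::map<void *, size_t> gLive;

void *trackedMalloc(size_t size) {
    void *p = std::malloc(size);
    if (p) gLive[p] = size;
    return p;
}

void trackedFree(void *p) {
    gLive.erase(p);
    std::free(p);
}

// A snapshot is the set of pointers currently live, like one memstatus file.
std::set<void *> snapshot() {
    std::set<void *> s;
    for (auto &kv : gLive) s.insert(kv.first);
    return s;
}

// Allocations present in 'after' but not 'before' are the leak suspects,
// mirroring the `diff memstatus_*.0 memstatus_*.1` step.
std::set<void *> leakSuspects(const std::set<void *> &before,
                              const std::set<void *> &after) {
    std::set<void *> out;
    for (void *p : after)
        if (!before.count(p)) out.insert(p);
    return out;
}
```

Anything freed between the two snapshots drops out of the diff automatically, which is why playing "several files" between snapshots helps separate steady-state caching from a genuine per-playback leak.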
[Update: 07/05/2010]
In the Froyo release, there is no need to replace libc.so with libc_debug.so. Try the steps below instead of the original step 2:
  • adb shell setprop libc.debug.malloc 1
  • adb shell ps mediaserver
  • adb shell kill <mediaserver_pid>
And I also updated the script to skip libc_debug.so while parsing symbols.

[Update: 06/22/2013]
Actually Android has a built-in service which can dump mediaserver's memory directly, so you can replace the memorydumper-related steps with the command below. Sorry that I wasn't aware of this tool when I wrote this article :)
 #dumpsys "media.player" -m

Tuesday, January 5, 2010

Current status of Stagefright

  • Local file playback (AMR/MP3/MP4/...) and basic http playback support;
  • Video only recording;
  • A corresponding metadata retriever, not fully implemented yet.
Some thoughts on possible enhancements:
  • There is no abstraction layer for the media sink, and no pluggable mechanism to pick a decoder dynamically based on the source's format and the sink's capabilities. The overall architecture doesn't seem flexible enough to date;
  • It's good to re-use the same MediaExtractor from the player engine for metadata retrieval to avoid duplication. However, MediaExtractor itself might not get all required keys reliably; some keys (e.g. video width/height) might be obtained more accurately from the decoder's output;
  • Audio-only and audio+video recording have yet to be supported;
  • Http support is not complete: no BUFFERING status updates, and no asynchronous messages for data insufficient/available;
  • No RTSP support yet.
Anyway, it's still under active development and it's from Google, so it wouldn't be surprising if it quickly evolves into a replacement for OpenCORE in the near future;)

Monday, January 4, 2010

An overview of Stagefright player

A new playback engine implemented by Google comes with Android 2.0 (i.e. Stagefright), and it seems quite simple and straightforward compared with the OpenCORE solution.
  • MediaExtractor is responsible for retrieving track data and the corresponding metadata from the underlying file system or http stream;
  • Decoding leverages OMX: there are currently two OMX plugins, adapting PV's software codecs and the vendor's hardware implementation respectively. There is also a local implementation of software codecs which encapsulates PV's decoder APIs directly;
  • AudioPlayer is responsible for rendering audio; it also provides the timebase for timing and A/V synchronization whenever an audio track is present;
  • Depending on which codec is picked, a local or remote renderer will be created for video rendering; the system clock is used as the timebase for video-only playback;
  • AwesomePlayer works as the engine coordinating the above modules, and is connected into the Android media framework through the StagefrightPlayer adapter.
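The audio-as-timebase idea in the bullets above can be sketched as follows (a simplified model with hypothetical names, not AwesomePlayer's actual code): the video path asks the audio clock for "now" and decides whether a frame is early, on time, or late:

```cpp
#include <cstdint>

// Simplified model of A/V sync: the audio playback position is the
// master clock, and video frames are scheduled against it.
struct AudioClock {
    int64_t framesPlayed = 0;     // PCM frames handed to the audio sink
    int32_t sampleRate = 44100;
    int64_t nowUs() const { return framesPlayed * 1000000LL / sampleRate; }
};

enum class VideoAction { Wait, Render, Drop };

// Decide what to do with a video frame stamped 'ptsUs', given the audio
// clock and a tolerance window (40 ms is roughly one frame at 25 fps).
VideoAction scheduleFrame(const AudioClock &clock, int64_t ptsUs,
                          int64_t toleranceUs = 40000) {
    int64_t lateUs = clock.nowUs() - ptsUs;
    if (lateUs < -toleranceUs) return VideoAction::Wait;   // too early
    if (lateUs > toleranceUs)  return VideoAction::Drop;   // too late
    return VideoAction::Render;
}
```

With no audio track, `nowUs()` would instead be derived from the system clock, matching the video-only case above.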