For those who don't know, PulseAudio is pretty much a de-facto standard part of the Linux audio stack. It sits on top of ALSA, which provides a unified way to talk to the audio hardware, and adds a number of handy features that are useful on desktops and embedded devices. I won't rehash all of these, but they include a nice modular framework, a bunch of power-saving features, flexible routing, and lots more. PulseAudio runs as a daemon, and clients usually use the libpulse library to communicate with it.

In the other corner, we have Android's native audio system - AudioFlinger. AudioFlinger was written from scratch for Android. It provides an API for playback/recording as well as a control mechanism for implementing policy. It does not depend on ALSA; instead, it allows for a sort of HAL that vendors can implement any way they choose. Applications generally play audio via layers built on top of AudioFlinger. Even if you write a native application, it would use the OpenSL ES implementation, which goes through AudioFlinger. The actual service runs as a thread of the mediaserver daemon, but this is merely an implementation detail.
So to sum up, in your typical system these days, ALSA talks directly to your sound card, and PulseAudio talks to your apps and programs and feeds their audio into ALSA. I've been meaning to try this for a while, and we've heard a number of requests from the community as well. Recently, I got some time here at Collabora to give it a go - that is, to get PulseAudio running on an Android device and see how it compares with Android's AudioFlinger.
ALSA is the kernel-level sound layer; it manages your sound card directly. By itself, ALSA can only handle one application at a time. Of course, there is 'dmix', an ALSA plugin which was written to solve this problem. PulseAudio is a software mixer that runs in userspace, just like any other application you'd launch.