servo media chicago
padenot is worried about latency for gstreamer in realtime
webkit's implementation is based on gstreamer; igalia is not concerned
implementing the entire pipeline in gstreamer is not feasible, but nobody does that anyway
gstreamer has the concept of elements - you chain elements together into a pipeline
an element could read from a file, generate a sine wave, perform a convolution, etc.
each web audio AudioNode could conceptually be a gstreamer element, but that seems unnecessary
we would be converting to/from gstreamer abstractions a lot, and the DOM is able to mutate audio nodes
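a minimal sketch of the element/pipeline idea with the gstreamer-rs bindings - a sine-wave source element chained into the platform audio sink. exact constructor signatures differ between gstreamer-rs versions, so treat this as illustrative:

```rust
use gstreamer as gst;
use gst::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    gst::init()?;

    // Each element is one processing step: audiotestsrc generates a sine
    // wave, autoaudiosink picks the platform's audio output.
    let src = gst::ElementFactory::make("audiotestsrc").build()?;
    let sink = gst::ElementFactory::make("autoaudiosink").build()?;

    // Chain the elements together into a pipeline.
    let pipeline = gst::Pipeline::default();
    pipeline.add(&src)?;
    pipeline.add(&sink)?;
    src.link(&sink)?;

    pipeline.set_state(gst::State::Playing)?;
    std::thread::sleep(std::time::Duration::from_secs(2));
    pipeline.set_state(gst::State::Null)?;
    Ok(())
}
```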

what is the purpose of gstreamer instead of ffmpeg for input and per-platform audio output? gstreamer provides a lot of foundation work that we then don't need to do ourselves. it also has very nice rust bindings, direct support from its developers and igalia, and there is already an implementation in another browser (webkit) using the same backend design.

instead, to avoid the full-pipeline approach, we will use gstreamer only for the first and last parts of the pipeline (decoding input and playing output)
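a sketch of what "only the last part" could look like: the web audio processing stays in rust, and the final mixed, interleaved samples are pushed into an appsrc whose downstream elements handle per-platform output. this assumes the gstreamer and gstreamer-app crates; exact method names vary across gstreamer-rs versions:

```rust
use gstreamer as gst;
use gstreamer_app as gst_app;
use gst::prelude::*;

/// Build an output-only pipeline: appsrc -> audioconvert -> audioresample -> autoaudiosink.
fn build_output(sample_rate: i32) -> Result<(gst::Pipeline, gst_app::AppSrc), Box<dyn std::error::Error>> {
    gst::init()?;
    let pipeline = gst::parse_launch(
        "appsrc name=src format=time is-live=true ! audioconvert ! audioresample ! autoaudiosink",
    )?
    .downcast::<gst::Pipeline>()
    .unwrap();

    let src = pipeline
        .by_name("src")
        .unwrap()
        .downcast::<gst_app::AppSrc>()
        .unwrap();
    // Tell downstream what we will push: stereo interleaved f32 frames.
    src.set_caps(Some(
        &gst::Caps::builder("audio/x-raw")
            .field("format", "F32LE")
            .field("rate", sample_rate)
            .field("channels", 2i32)
            .field("layout", "interleaved")
            .build(),
    ));
    pipeline.set_state(gst::State::Playing)?;
    Ok((pipeline, src))
}

/// Push one rendered block (already interleaved as L,R,L,R,...) into the sink.
fn push_block(src: &gst_app::AppSrc, interleaved: &[f32]) -> Result<(), gst::FlowError> {
    let mut buffer = gst::Buffer::with_size(interleaved.len() * 4).unwrap();
    {
        let buffer = buffer.get_mut().unwrap();
        let mut map = buffer.map_writable().unwrap();
        for (dst, s) in map.as_mut_slice().chunks_exact_mut(4).zip(interleaved) {
            dst.copy_from_slice(&s.to_le_bytes());
        }
    }
    src.push_buffer(buffer).map(|_| ())
}
```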
gstreamer handles per-platform differences - how does sandboxing an audio process affect this?
does the embedder need to provide anything for gstreamer? on android we use opensles; a gstreamer trick exists to make playback work out of the box
should we create the media graph concept so it supports general media besides webaudio as well? that is the plan.

does gstreamer support MSE/EME? does webkit support those? it does: http://eocanha.org/blog/2016/02/18/improving-media-source-extensions-on-gstreamer-based-webkit-ports/

dependencies: just gstreamer-rs is needed on desktop. for android, we need to run NDK build scripts that depend on binaries built for the platform. the binaries are quite big (~700mb) - the idea is to create an ndk-build script with pre-downloaded binaries, generate a single .so file with all dependencies, and serve that .so file from github pages. servo-media pulls this file (~30mb) right now.

static or dynamically linked? static.

discussed the abstraction in servo-media - there is a clear plan for that. some doubts remain about remoting gstreamer into another process.

breaking down the work:
  • abstraction around the audio context, audio graph, and audio nodes (see the sketch after this list)
  • parallelize work to create audio node implementations
  • need an abstraction around the “audio task” in the event loop
  • audio decoding with gstreamer - decode files from URLs, memory chunks, blobs
  • playing audio with gstreamer - interleaving audio data
  • off-thread web audio in workers
  • figure out video prioritization?
  • figure out webrtc prioritization?
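one possible shape for the audio node / audio graph abstraction mentioned above, assuming the 128-frame render quantum from the web audio spec; the names here are illustrative, not servo-media's actual API:

```rust
/// A block of planar audio data for one render quantum (e.g. 128 frames per channel).
pub struct Chunk {
    pub channels: Vec<Vec<f32>>,
}

/// Implemented by every node type (GainNode, OscillatorNode, PannerNode, ...).
/// The graph calls process() once per render quantum with the node's mixed inputs.
pub trait AudioNodeEngine: Send {
    fn process(&mut self, inputs: &[Chunk], sample_rate: f32) -> Chunk;
}

/// Example node: applies a constant gain to its single input.
pub struct GainNode {
    pub gain: f32,
}

impl AudioNodeEngine for GainNode {
    fn process(&mut self, inputs: &[Chunk], _sample_rate: f32) -> Chunk {
        let input = &inputs[0];
        Chunk {
            channels: input
                .channels
                .iter()
                .map(|ch| ch.iter().map(|s| s * self.gain).collect())
                .collect(),
        }
    }
}
```

making the trait Send leaves room for rendering off the main thread (the workers item above); how the DOM's mutations of audio nodes reach the render side is a separate design question.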

foundation work:
  • one month with one person
  • could be parallelized between manish and ferjm
  • biggest concern is how to represent the media graph - petgraph? our own implementation with garbage collection? (see the sketch after this list)
  • is there a way to structure the work so that higher-level pieces can be in flight at the same time?
  • could build a very basic gstreamer pipeline to allow testing node implementations without the full foundation - Done
  • prioritize nodes to be implemented
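a sketch of the petgraph option for the media graph question above - nodes hold the per-node state, edges are AudioNode connections, and processing runs in topological order. purely illustrative; cycles (e.g. delay-node feedback loops, which web audio allows) would need extra handling:

```rust
use petgraph::graph::{DiGraph, NodeIndex};
use petgraph::visit::Topo;

/// Hypothetical per-node state; in practice this would be an enum or trait
/// object over the audio node implementations.
struct NodeState {
    gain: f32,
}

struct AudioGraph {
    graph: DiGraph<NodeState, ()>, // edge weights could later carry port/channel info
    dest: NodeIndex,
}

impl AudioGraph {
    fn new() -> Self {
        let mut graph = DiGraph::new();
        // The destination node that every context has.
        let dest = graph.add_node(NodeState { gain: 1.0 });
        AudioGraph { graph, dest }
    }

    fn add_node(&mut self, state: NodeState) -> NodeIndex {
        self.graph.add_node(state)
    }

    /// Connect `from` -> `to`, like AudioNode.connect().
    fn connect(&mut self, from: NodeIndex, to: NodeIndex) {
        self.graph.add_edge(from, to, ());
    }

    /// Process one render quantum in topological order so each node sees
    /// its inputs before it runs.
    fn process(&mut self) {
        let mut topo = Topo::new(&self.graph);
        while let Some(idx) = topo.next(&self.graph) {
            let _state = &mut self.graph[idx];
            // mix this node's inputs, run its DSP, stash its output buffer...
        }
    }
}
```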

contracting:
  • gstreamer specifics - an element that takes the result from the webaudio pipeline and outputs it; the decoding parts
  • media APIs - webRTC? media elements? video?

other work to prioritize:
  • profiling? measuring audio latency?
  • state of testing in web-platform-tests and Firefox?
  • NDK 12b could be a problem