Unified Dataflow Framework Meeting Agenda

  • Explain the current state of things
    • We currently have two wholly separate APIs for dataflow: one for bit-vector (gen/kill) problems, and one that allows arbitrary transfer functions but still restricts the lattice to a powerset of indices.
    • I want to merge these two to make the code more DRY.
    • The prototype implementation is described pretty thoroughly in the design doc, but maybe go over it briefly?
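To make the split concrete, here is a minimal sketch of the two styles of API. All names (`GenKillAnalysis`, `GenericAnalysis`, `InitLocals`, etc.) are illustrative, not the actual rustc traits; the point is that a gen/kill analysis is trivially expressible as an arbitrary transfer function over the same powerset domain, which is what makes merging attractive.

```rust
use std::collections::BTreeSet;

/// Bit-vector style (illustrative name): the state is a set of indices
/// (a powerset lattice) and the transfer function can only gen/kill bits.
trait GenKillAnalysis {
    fn gens(&self, stmt: usize) -> Vec<usize>;
    fn kills(&self, stmt: usize) -> Vec<usize>;
}

/// Generic style (illustrative name): the transfer function is
/// arbitrary, but the state is still a powerset of indices.
trait GenericAnalysis {
    fn statement_effect(&self, state: &mut BTreeSet<usize>, stmt: usize);
}

/// Every gen/kill analysis is trivially a generic one: kill, then gen.
impl<T: GenKillAnalysis> GenericAnalysis for T {
    fn statement_effect(&self, state: &mut BTreeSet<usize>, stmt: usize) {
        for b in self.kills(stmt) {
            state.remove(&b);
        }
        for b in self.gens(stmt) {
            state.insert(b);
        }
    }
}

/// Toy analysis for demonstration: statement `i` initializes local `i`.
struct InitLocals;

impl GenKillAnalysis for InitLocals {
    fn gens(&self, stmt: usize) -> Vec<usize> {
        vec![stmt]
    }
    fn kills(&self, _stmt: usize) -> Vec<usize> {
        vec![]
    }
}
```

The blanket impl is the crux: if the unified framework exposes only the generic interface, gen/kill problems can keep their concise form while sharing one engine.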

  • The prototype is not very ambitious. It mostly preserves the existing BitDenotation API but extends it to arbitrary transfer functions.
  • Do we want to maintain the status quo and have two separate implementations forever? Once arbitrary lattices are supported, maybe they won't fit as nicely into the same API.
    • OTOH, adding backwards analysis would have to be done in two places.
  • There have been more ambitious changes attempted. Perhaps we want to do one of these instead? Or at least incorporate some of their ideas into a framework that hews closer to the existing one.
    • @eddyb was looking into a simultaneous forwards/backwards dataflow-like thing for NRVO (link?)

  • Discuss the prototype in detail (if we decide we still want to go this way)
    • How do we feel about the {before_,}*_effect naming convention? It takes a bit of getting used to, and doesn't extend nicely to backwards dataflow.
      • OTOH, having a canonical statement_effect that most implementers want to use, plus a secondary, prefixed effect that is only rarely needed, is good.
    • Should we be passing a mir::Statement to the statement-effect methods? What about a mir::Body to bits_per_block?
    • Should we use specialization to implement this?
    • Should the Analysis methods take &self or &mut self?
      • Some analyses may depend on the results of an earlier dataflow analysis and want to hold a cursor.
      • However, transfer functions should be pure for a given MIR, so maybe forcing those analyses to use interior mutability is appropriate.
    • Are we okay with a ResultsCursor becoming the common way to inspect results?
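As a discussion aid, here is a much-simplified sketch of the naming convention and of cursor-based inspection. This is not the prototype's actual code: the `Analysis` and `ResultsCursor` names echo the design doc, but the state type, method signatures, and the cursor's replay strategy are all stand-ins.

```rust
use std::collections::BTreeSet;

type State = BTreeSet<usize>;

trait Analysis {
    /// Secondary effect applied before the statement proper; rarely
    /// overridden, so it gets a default no-op implementation.
    fn before_statement_effect(&self, _state: &mut State, _stmt: usize) {}

    /// The canonical effect that most implementers provide.
    fn statement_effect(&self, state: &mut State, stmt: usize);
}

/// Stand-in for a ResultsCursor: replay a block's effects up to a
/// statement and return the state after it. (A real cursor would seek
/// within cached fixpoint results instead of recomputing.)
struct ResultsCursor<'a, A: Analysis> {
    analysis: &'a A,
    block_entry_state: State,
}

impl<'a, A: Analysis> ResultsCursor<'a, A> {
    fn state_after(&self, stmt: usize) -> State {
        let mut state = self.block_entry_state.clone();
        for s in 0..=stmt {
            self.analysis.before_statement_effect(&mut state, s);
            self.analysis.statement_effect(&mut state, s);
        }
        state
    }
}

/// Toy analysis for demonstration: statement `i` sets bit `i`.
struct MarkEach;

impl Analysis for MarkEach {
    fn statement_effect(&self, state: &mut State, stmt: usize) {
        state.insert(stmt);
    }
}
```

Note that this sketch takes `&self` everywhere; an analysis that needs to consult another analysis's cursor while in a transfer function would have to use interior mutability under this signature, which is exactly the trade-off raised above.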

  • Discuss which extensions to the framework we want to prioritize
    • Arbitrary lattices (would make MIR const-propagation more powerful)
      • OTOH, the current linear const-propagation gets most of the benefits without the overhead of dataflow.
    • Optimizations:
      • Don't cache block transfer functions on acyclic MIR (reduces memory usage)
      • Extended basic blocks? @Mark-Simulacrum
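To illustrate why arbitrary lattices matter for const-propagation: a flat lattice over constant values can record "this local is exactly the constant 5", which a powerset-of-indices domain cannot express. The sketch below is illustrative only; the type and method names are not from rustc.

```rust
/// A flat (height-2) lattice over integer constants, the classic
/// domain for constant propagation.
#[derive(Clone, Copy, PartialEq, Debug)]
enum FlatLattice {
    Bottom,     // no information yet (unreachable / unvisited)
    Const(i64), // known to be exactly this constant
    Top,        // could be anything
}

impl FlatLattice {
    /// Join (least upper bound), applied where control-flow paths
    /// merge: agreeing constants survive, disagreeing ones go to Top.
    fn join(self, other: FlatLattice) -> FlatLattice {
        use FlatLattice::*;
        match (self, other) {
            (Bottom, x) | (x, Bottom) => x,
            (Const(a), Const(b)) if a == b => Const(a),
            _ => Top,
        }
    }
}
```

A bit-vector framework can only say "index `i` is in the set or not"; supporting a per-local `FlatLattice` (or any user-defined join-semilattice) is the kind of generalization that may not fit cleanly into the current powerset-shaped API.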