
Add MappingAudioSource for just-in-time audio source creation #779

Open
hacker1024 wants to merge 8 commits into base: minor

Conversation


@hacker1024 hacker1024 commented Jul 23, 2022

This PR implements some of the ideas discussed in #777.

API additions

MappingAudioSource, a new type of AudioSource, has been added. On supported platforms (more on that later), it allows the creation of an AudioSource to be deferred until it is truly needed by the player (for preloading or playback purposes).

This is useful for APIs that require a secondary request to retrieve an audio URL. For example:

final apiService = ApiService();

final queue = await apiService.getQueue();
final mappingAudioSources = [
  for (final queueItem in queue)
    MappingAudioSource(
      queueItem,
      (queueItem) async => AudioSource.uri(await apiService.getUrl(queueItem)),
    ),
];

Any type of IndexedAudioSource can be lazily created in this fashion.

If there's a problem creating the AudioSource, null can be returned. An empty placeholder is then used, and the player moves on to the next item (or stops).

The availability of MappingAudioSource on the current platform can be checked with the static MappingAudioSource.supportedOnCurrentPlatform getter.
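For example, the check can gate the queueing strategy. ConcatenatingAudioSource and setAudioSource below are existing just_audio APIs, and player is assumed to be an AudioPlayer instance; the eager fallback branch is only an illustration:

if (MappingAudioSource.supportedOnCurrentPlatform) {
  await player.setAudioSource(
    ConcatenatingAudioSource(children: mappingAudioSources),
  );
} else {
  // Fallback sketch: resolve all URLs up front on unsupported platforms.
  final eagerSources = [
    for (final queueItem in queue)
      AudioSource.uri(await apiService.getUrl(queueItem)),
  ];
  await player.setAudioSource(ConcatenatingAudioSource(children: eagerSources));
}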

Platform interface changes

The platform interface has been changed in a non-breaking manner.

  • MappingAudioSourceMessage has been added
  • The main platform interface now takes a getAudioSourceMessage function as a constructor argument. This function should return an AudioSource associated with an ID. It is an optional named argument; if the real implementation is not provided, a placeholder that throws when called is used (see the sketch after this list).
  • The main platform interface has gained a supportsMappingAudioSource getter that is false by default.
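A rough sketch of the shape of these additions; the real JustAudioPlatform extends PlatformInterface and the AudioSourceProvider typedef is hypothetical, but getAudioSourceMessage and supportsMappingAudioSource are the names described above:

// Simplified sketch of the platform interface additions.
typedef AudioSourceProvider = Future<AudioSourceMessage> Function(String id);

class JustAudioPlatform {
  JustAudioPlatform({AudioSourceProvider? getAudioSourceMessage})
      : getAudioSourceMessage = getAudioSourceMessage ??
            ((id) => throw UnimplementedError(
                'getAudioSourceMessage was not provided.'));

  /// Maps a MappingAudioSource ID to its just-in-time created source.
  final AudioSourceProvider getAudioSourceMessage;

  /// Whether this platform can defer audio source creation.
  bool get supportsMappingAudioSource => false;
}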

Implementation details

Deferred AudioSource creation

This feature has been implemented by adding new audio source players (AudioSourcePlayer on the Web, MediaSource on Android) that create, and start using, an underlying audio source player when their prepare/load method is first called.

The MethodChannel platform interface implementation adds a "createMappedAudioSourceSource" API to facilitate this. When given a MappingAudioSource ID, it creates the underlying AudioSource and returns its message representation.
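Conceptually, the deferred players behave like the lazy wrapper below. This is a sketch with hypothetical names (LazyAudioSourcePlayer, EmptyAudioSourcePlayer, and the load method), not the actual web or Android code:

// Sketch: create the underlying player on first load, falling back to
// an empty placeholder if creation fails (i.e. the mapper returned null).
class LazyAudioSourcePlayer {
  LazyAudioSourcePlayer(this._createInner);

  final Future<AudioSourcePlayer?> Function() _createInner;
  AudioSourcePlayer? _inner;

  Future<void> load() async {
    _inner ??= await _createInner() ?? EmptyAudioSourcePlayer();
    await _inner!.load();
  }
}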

Placeholder media when null is returned

If a problem is encountered creating the AudioSource in the MappingAudioSource, null should be returned. When this happens, an empty placeholder audio source is used in its place, and the player moves on to the next item.

Supported platforms

Because the typical implementation of this feature relies on asynchronous load/prepare methods in the platform audio player APIs, it is not feasible to support on platforms like macOS and iOS.

I have experimented with taking advantage of the proxy functionality to allow deferred URLs to be used in a similar way, but this isn't very useful at the moment, as macOS (and I assume iOS) makes an HTTP request as soon as the source is added, even if it's not meant to be played yet.

@ryanheise
Owner

macOS (and I assume iOS) makes an HTTP request as soon as the source is added, even if it's not meant to be played yet.

That behaviour can be changed. When adding items to a playlist on the Dart side, the current implementation immediately adds those items to the queue, but that behaviour can be changed to lazily add items to the queue only just before they're needed. I have experimented with this on the feature/treadmill branch, although the laziness is not as aggressive as you may want it to be. Again, that can be changed by programming it to add the next item to the queue when coming within a certain number of seconds of the end of the previous item.

Also, since you are using prepareSource on Android, which can only be called once, it isn't the same as I had thought, where it could be misused to create a jukebox. I think it's a good idea to expose this sort of "prepare" API, and from memory this sort of just-in-time preparation has come up in another request before. I would just want to make sure this solution serves both use cases, so I'll need to go and chase that down.

@hacker1024 hacker1024 marked this pull request as draft July 23, 2022 17:16
@hacker1024
Author

I just realized that the setup function is not being called when the audio source is created. That'll need fixing before this is merged - marking as a draft for now.

@ryanheise
Owner

Also regarding the fact that prepareSource is only called once, the reason I brought that up is that the name "lazy" actually makes more sense now than if it had been implemented in a platform independent manner using the proxy.

By the way, I'm still not 100% convinced that this couldn't be done using the platform-independent proxy and StreamAudioSource. After all, you have as little choice over when the next item is prepared as you do over when the next StreamAudioSource's "request" method is called. Can you explain it? If macOS/iOS is the only reason, what difference does it make if macOS/iOS isn't currently supported?

@hacker1024
Author

Also regarding the fact that prepareSource is only called once, the reason I brought that up is that the name "lazy" actually makes more sense now than if it had been implemented in a platform independent manner using the proxy.

This is true, but I ended up liking the name Mapping. This can of course be changed.

By the way, I'm still not 100% convinced that this couldn't be done using the platform-independent proxy and StreamAudioSource. After all, you have as little choice over when the next item is prepared as you do over when the next StreamAudioSource's "request" method is called. Can you explain it? If macOS/iOS is the only reason, what difference does it make if macOS/iOS isn't currently supported?

This was actually my initial approach, but I ended up moving away from it for a few reasons.

  1. Web support. To my understanding, the Web implementation of StreamAudioSource buffers everything at once and converts it to a base64 data URI. With this implementation of MappingAudioSource, though, that isn't done and the audio is buffered normally.
  2. Error handling. What should happen if there's a network problem when fetching the data to be sent through the proxy? What about when there's a problem fetching the URL to begin with? It all gets quite complex.
  3. Flexibility and code reuse. This implementation allows any kind of IndexedAudioSource to be provided late. A StreamAudioSource-based solution would only work with ProgressiveAudioSources, as that would be the type initially given to the platform. If the audio needs to be transformed (as ClippingAudioSource does), that would require a Dart implementation as well. This implementation allows the existing platform implementation for each AudioSource to be used.

@hacker1024 hacker1024 marked this pull request as ready for review July 24, 2022 04:33
@ryanheise
Owner

Here is the other related issue/comment: #294 (comment)

I like the naming of calling it a "childBuilder" or something of the sort, since it fits with Flutter's pattern of builders. But more generally, it is tempting me again back to this idea of trying to implement things more generally on the Dart side. The platform implementations currently have to do a lot of stuff, and so maintaining platform support is quite tricky, especially surrounding all the different sorts of media sources.

I think that since all platforms should want to have lazy loading, and now this idea of child builders, it becomes even more clear that the platform doesn't really need to know all the details. The prepare phase could be made platform independent by just detecting on the Dart side when approaching the end of the current item and triggering the Dart callback. Then the platform side doesn't really need to manage the entire playlist, it only needs to manage a lookahead buffer large enough to support gapless playback.

To further simplify all the different sorts of media sources, I should also eventually not have a separate type for clipping audio sources, but just make it so that all UriAudioSources have a clipStart and clipEnd which can be null. LoopingAudioSource can also be handled completely on the Dart side since it merely involves duplication. In terms of backwards compatibility of the platform interface, I guess that we can use type clipping instead of progressive.

I know that doesn't help progress this issue much, in fact it would set things back considerably. So maybe that's an idea for later and I have to think about what can be done now, and where to head to in the future.

@hacker1024
Author

Then the platform side doesn't really need to manage the entire playlist, it only needs to manage a lookahead buffer large enough to support gapless playback.

Once this is implemented, adding this lazy feature sounds fairly trivial.

I know that doesn't help progress this issue much, in fact it would set things back considerably. So maybe that's an idea for later and I have to think about what can be done now, and where to head to in the future.

Indeed, these architectural changes sound great but will also take some time.

If the MappingAudioSource API is likely to change when these changes are made, then perhaps this PR should not be merged yet. I don't think that's very likely, though, as it's a pretty basic API that would probably not need to be changed - so I suggest this can be merged now, and then as the changes you mention are made, support can be added for other platforms.

@hacker1024
Author

hacker1024 commented Jul 24, 2022

Actually, now that I think about it, MappingAudioSource may not be needed at all once the Dart-side treadmill is implemented.

The ConcatenatingAudioSource could take a list of functions instead, and then call them to create the AudioSource before adding it. If there's a problem and the function returns null, it just won't be added.

final audioSourceFactories = [
  () async => AudioSource.uri(await apiService.getUrl(queue[0])),
  () async => AudioSource.uri(await apiService.getUrl(queue[1])),
  () async => AudioSource.uri(await apiService.getUrl(queue[2])),
];

This may cause problems with the sequence getter, though.

@austinried

Just to chime in here, I actually ended up implementing something similar to this for my project. I'm using it at the moment, but it's definitely not ready for a PR, and I took a slightly different route. Since I only support Android currently, I took advice from the ExoPlayer issue I found and implemented the ResolvingAudioSource: google/ExoPlayer#10200 (comment)

You can see my changes so far here if you're curious (ignore the commented out error track skipping, that's just some behavior I wasn't a fan of): https://github.com/ryanheise/just_audio/compare/just_audio-v0.9.24...austinried:just_audio:subtracks?expand=1

For my use case, the only thing I really want to be able to load just-in-time is the URL, because it's the only part that would require re-creating the media source. The other bit that I load just-in-time is the image, but I can update the metadata (using audio_service currently) separately without involving the player for that. I had tried earlier methods here that involved replacing the source in the queue from the Dart side as I loaded things, but that led to some undesirable side effects in the player.

Anyway, just wanted to see if you had considered ResolvingAudioSource for this or if it could be improved by using it.

@hacker1024
Author

Thanks @austinried, I did consider ResolvingAudioSource, but as the logic would have been very similar to what was required in this implementation I just decided to go for allowing the entire AudioSource to be provided late.

@ryanheise
Owner

Interesting. I guess they both achieve the same result, but what a great name: ResolvingAudioSource.

@ryanheise
Owner

So how about it - shall we agree on ResolvingAudioSource or would you prefer to keep the current name?

@shuyec

shuyec commented Aug 19, 2022

Hello @hacker1024, I've copied MappingAudioSource locally and I'm trying to use it. I'm having a hard time understanding how to get the tag of the AudioSource generated by MappingAudioSource. I used to get it using the currentSource of sequenceStateStream, but now it does not work because it's an instance of MappingAudioSource<dynamic>.

@ryanheise
Owner

I'm having a hard time understanding how to get the tag of the AudioSource generated by MappingAudioSource.

Why do you say the tag is generated? I don't think this PR changes anything about the way tags normally work in normal usage of just_audio.

@shuyec

shuyec commented Aug 20, 2022

Why do you say the tag is generated?

I have something like the example set up on this player and using a local Flask server.

_playlist = ConcatenatingAudioSource(children: []);
final mappingAudioSources = [
  for (final queueItem in queue)
    MappingAudioSource(
      queueItem,
      (queueItem) async {
        if (queueItem != null) {
          queueItem = queueItem as Map;
          final AudioMetadata? queueSong =
              await getMedia(videoId: queueItem["videoId"]);
          if (queueSong != null) {
            final queueSongData = {
              "title": queueSong.title,
              "songUrl": queueSong.mediaUrl,
            };
            return AudioSource.uri(Uri.parse(queueSong.mediaUrl),
                tag: queueSongData);
          }
        }
        return null; // Fall back to the empty placeholder.
      },
    ),
];
_playlist.addAll(mappingAudioSources);

Something like this throws an error

_audioPlayer.sequenceStateStream.listen((sequenceState) {
  if (sequenceState == null) return;
  // Update the current song title.
  final currentItem = sequenceState.currentSource;
  print("DEBUG: currentItem is $currentItem");
  // Prints: currentItem is Instance of 'MappingAudioSource<dynamic>'
  final title = currentItem?.tag["title"] as String?;
  // Throws: NoSuchMethodError: The method '[]' was called on null.
  currentSongTitleNotifier.value = title ?? '';
});

If I add them normally through a loop and _playlist.add, currentItem is an Instance of 'ProgressiveAudioSource' which has the property tag.

I also noticed that if you seek near the end using the progress bar and listen with _audioPlayer.durationStream.listen((totalDuration) { ... }), sometimes (it doesn't always happen) totalDuration is null. The audio plays fine, so the request worked. I thought it was my local server, but it doesn't happen with a for loop, adding them one by one using _playlist.add(AudioSource.uri(Uri.parse(queueSong.mediaUrl), tag: queueSongData));.

@ryanheise
Owner

OK, so I don't think you explained how the tag was generated "by" MappingAudioSource. The tag is not generated by the plugin; it is supplied by you in the constructor. You didn't actually supply a tag to MappingAudioSource in the constructor, which is why it is null. You could use a Completer to produce a future value for that tag parameter.
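A rough sketch of that Completer idea, reusing the getMedia helper from your snippet and assuming the MappingAudioSource constructor forwards a tag parameter:

import 'dart:async';

// Sketch: complete the tag once the metadata becomes known.
final tagCompleter = Completer<Map<String, String>>();
final source = MappingAudioSource(
  queueItem,
  (queueItem) async {
    final queueSong = await getMedia(videoId: queueItem["videoId"]);
    if (queueSong == null) return null;
    tagCompleter.complete({
      "title": queueSong.title,
      "songUrl": queueSong.mediaUrl,
    });
    return AudioSource.uri(Uri.parse(queueSong.mediaUrl));
  },
  tag: tagCompleter.future,
);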

@hacker1024 , there is an interesting API design question here as to whether the tag of the outer audio source should be overridden to automatically delegate to the tag of the inner audio source...

@cooaer

cooaer commented Aug 25, 2022

Hi @hacker1024, I have implemented a ResolvingAudioSource in Dart without platform code, which resolves the song URL (with headers) just before playback. This is a simple implementation and I hope it will help you.

@hacker1024
Author

Thanks for the PR @cooaer. I did experiment with this method as well, as I mentioned in my first post. My implementation ended up being very similar to yours.

This approach has a few drawbacks, though:

  • It does not support composing AudioSources - a ClippingAudioSource, for example, does not work here.
  • It (as far as I can tell) uses dart:io, and will not run on the Web.
  • Manually making HTTP requests in Dart isn't ideal. The behaviour (regarding proxy servers, SSL certificates, error handling, etc.) will not match other AudioSources that use network resources through platform-specific methods.
  • Streaming data with Dart (in the main isolate as well!) just to send it through to the system is rather inefficient.

@hacker1024
Author

So how about it - shall we agree on ResolvingAudioSource or would you prefer to keep the current name?

Sorry for the delay - I've been quite busy lately. I quite like the name ResolvingAudioSource, I'll update my PR.

@hacker1024
Author

Ultimately, I think the best solution to this problem is to implement the "treadmill" as discussed. The queue should not be managed entirely by the platform; instead just_audio should respond to playback events to add/start loading the next AudioSource when it is required. Once this is implemented, a ResolvingAudioSource implementation becomes trivial.

That feature will take some effort, though. In the meantime, if a temporary solution was to be merged, I like my implementation for the reasons I have stated. It does add quite a bit of platform functionality, though, which may not be ideal.

@ryanheise
Owner

Ultimately, I think the best solution to this problem is to implement the "treadmill" as discussed.

There might actually be another way to do it on the Dart side that would be quicker to implement than waiting for the treadmill. Basically, the only thing that the Dart-side solution needs is the trigger event for when to resolve the audio source. And that is something we already have, in theory, because we can listen to when the current position is approaching the end of the current indexed audio source. The only design issue is how to encapsulate this logic inside of the ResolvingAudioSource class. What I think we could do is add some lifecycle methods directly to StreamAudioSource such as onPrepare, and that should provide enough of an infrastructure for this type of Dart-only solution to work.
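As a hypothetical shape of that idea (onPrepare does not exist in just_audio today; this is only a sketch):

// Sketch: a lifecycle hook the plugin could invoke just before this
// source is needed, e.g. when the previous item is close to finishing.
class ResolvingAudioSource extends StreamAudioSource {
  ResolvingAudioSource(this._resolve);

  final Future<Uri> Function() _resolve;
  Uri? _resolvedUri;

  Future<void> onPrepare() async {
    _resolvedUri ??= await _resolve();
  }

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) async {
    await onPrepare();
    // ...stream the audio bytes from _resolvedUri...
    throw UnimplementedError('Sketch only');
  }
}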

@Small-World

Excuse me! Will this branch be merged? @ryanheise

@ryanheise
Owner

Will this branch be merged? @ryanheise

My preference would be for a solution that does not involve changes to the platform interface that would need to be reverted in the future, since part of the design of the federated plugin architecture unfortunately makes it so that we can never make breaking changes - we must therefore try to get the platform interface correct the first time.

So this issue is still open until we can find a solution that meets these requirements.

My previous comment suggests one way to proceed, but until then you are free to actually use this PR directly (or #800).

@Small-World

okay, thank you

@mr-mmmmore

Hello, what is the status of this topic? I have a use case where I need to sign the URL just before starting playback.

I've read the discussion and don't get everything, since I haven't looked at the code. But regarding the other way you are thinking of, @ryanheise, I think that relying on the approaching end of the current source has drawbacks: it assumes that we know which audio source will be played next. In my use case I don't: there is a playlist and by default all sources are played sequentially, but the user can also decide at any time which source to play.

So if you are to trigger a new event that can be used to override the URL, I think it should fire just after the new source to play is set, but before any actual handling of the audio is done. Does that make sense to you?

@ryanheise
Owner

So if you are to trigger a new event that can be used to override the URL, I think it should fire just after the new source to play is set, but before any actual handling of the audio is done. Does that make sense to you?

I'm not sure I understand exactly. The point in using ConcatenatingAudioSource is to get gapless playback, and so the next item needs to start buffering before it actually starts to play. The idea of this new feature would be to resolve the URL "just in time", which in a gapless playlist means just as the previous item is almost finished playing and the next item is about to start. If that is insufficient, then you probably don't want to use a ConcatenatingAudioSource, and then you are also not constrained in your solution: you would basically implement your own loop to play each item one after the other, and somewhere in your own loop logic you could implement your requirement to switch URLs on the fly. That is, if you're not confined to using ConcatenatingAudioSource, you won't require the solution to be built into the plugin.

@mr-mmmmore

@hacker1024 The proposal from @cooaer could fit my use case, because I don't need the web, I do need it to work on both Android and iOS, and I don't compose audio. But I'd like to have a better understanding of its drawbacks. Could you please elaborate on these?

Manually making HTTP requests in Dart isn't ideal. The behaviour (regarding proxy servers, SSL certificates, error handling, etc.) will not match other AudioSources that use network resources through platform-specific methods.

What kind of problems could it raise? For example, if my app only plays audio using this behaviour, all audio sources will be handled the same way. Is there really a problem?

Streaming data with Dart (in the main isolate as well!) just to send it through to the system is rather inefficient.

You mean streaming the actual audio source data? When you say inefficient, could it cause performance issues when playing the audio?

Thank you

@ryanheise
Owner

It involves more CPU because every byte of data from the audio source is then going through an extra layer of redirection. So rather than just reading every byte from the source once and then buffering it, it reads it, then redirects it, then reads it again, then buffers it. (Speaking of which, it would be a good idea at some point to move the proxy code into an isolate.)

But you don't need to do any of that to implement this. A couple of solutions have been mentioned above that can be implemented in Dart without plugin support, and which don't involve adding that extra layer of streaming data.

@Chaphasilor

to resolve the URL "just in time" which in a gapless playlist means just as the previous item is almost finished playing and the next item is about to start

Just wanted to chime in and ask if this really is how it should work? We have a buffer duration after all, so ideally the next indexed source should be resolved once it needs to be buffered, no matter how far along the currently playing source is, right?
Or am I missing something here?

@ryanheise
Owner

ryanheise commented Nov 26, 2023

You are correct in principle, although the goal of this use case is to also make it buffer the next item as late as possible, so you will probably also want to pass the appropriate load control parameters into the constructor for iOS and Android to influence the buffering behaviour (although keep in mind that iOS doesn't seem to be very cooperative with these settings).

@mr-mmmmore

But you don't need to do any of that to implement this. A couple of solutions have been mentioned above that can be implemented in Dart without plugin support, and which don't involve adding that extra layer of streaming data.

The reason we've been using the ConcatenatingAudioSource is because we also use just_audio_background, and it seems it relies on this class to handle playlists. Do you think we could implement our own handling of the playlist in Dart and still benefit from just_audio_background? For example, by subclassing IndexedAudioSource to add our own just-in-time URI resolution logic?

@ryanheise
Owner

I still don't understand your use case and how you can't know what is to play next just before it's about to play. But in any case, it sounds like you have more advanced requirements than can be served by just audio background and you should be using audio service instead. There is no way to get a playlist in just audio background without coinciding with that gapless playback mechanism that you want to avoid.

@spakanati

Now that the treadmill branch seems to be nearing merge, is there a better way to hook into the trigger event to resolve the audio URL (mentioned here)?

@ryanheise
Owner

Not really; the treadmill branch is only an intermediate solution. It triggers the loading of the next item as soon as the current item is enqueued, but what we want here is to delay loading even further, until just before the next item is about to be played.

@spakanati

OK, thank you. I was hoping to use one of these resolving solutions, but MappingAudioSource doesn't support iOS yet and the Dart-only ResolvingAudioSource doesn't work with the visualizer. I'll try to dig into the source code to understand if there's any way to work around that (it seems the visualizer currently only works when using AudioSource.uri, and the Dart-only ResolvingAudioSource isn't composable, as mentioned above).

@ryanheise
Owner

Certainly I think a Dart-only solution is possible even today, by listening to the position stream and triggering your loading event when the position is near the end of the current item. If you need something today, that is certainly the way to go.
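For example, a minimal sketch of that Dart-only approach; the 10-second threshold and the resolveNext callback (which would prepare or swap in the real URL) are placeholders, while positionStream, currentIndexStream, and duration are existing AudioPlayer members:

import 'package:just_audio/just_audio.dart';

/// Sketch: call [resolveNext] when the current item is within
/// [threshold] of finishing.
void watchForJustInTime(
  AudioPlayer player,
  Future<void> Function() resolveNext, {
  Duration threshold = const Duration(seconds: 10),
}) {
  var resolving = false;
  // Allow resolution again whenever a new item starts playing.
  player.currentIndexStream.listen((_) => resolving = false);
  player.positionStream.listen((position) async {
    final duration = player.duration;
    if (duration == null || resolving) return;
    if (duration - position <= threshold) {
      resolving = true;
      await resolveNext();
    }
  });
}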
