Add MappingAudioSource for just-in-time audio source creation #779
Conversation
Inspiration taken from:
- google/ExoPlayer#7087
- google/ExoPlayer#5883
- google/ExoPlayer#7279
That behaviour can be changed. When adding items to a playlist on the Dart side, the current implementation immediately adds those items to the queue, but that behaviour can be changed to lazily add items to the queue only just before they're needed. I have experimented with this on the …

Also, since you are using prepareSource on Android, which can only be called once, it isn't the same as I had thought, where it could be misused to create a jukebox. I think it's a good idea to expose this sort of "prepare" API, and from memory this sort of just-in-time preparation has come up in another request before. I would just want to make sure this solution serves both use cases, so I'll need to go and chase that down.
I just realized that the setup function is not being called when the audio source is created. That'll need fixing before this is merged - marking as a draft for now.
Also regarding the fact that …

By the way, I'm still not 100% convinced that this couldn't be done using the platform-independent proxy and StreamAudioSource. After all, you have as little choice over when the next item is prepared as you do over when the next StreamAudioSource's "request" method is called. Can you explain it? If macOS/iOS is the only reason, what difference does it make if macOS/iOS isn't currently supported?
This is true, but I ended up liking the name …
This was actually my initial approach, but I ended up moving away from it for a few reasons.
Here is the other related issue/comment: #294 (comment)

I like the idea of calling it a "childBuilder" or something of the sort, since it fits with Flutter's pattern of builders. But more generally, it is tempting me back to the idea of trying to implement things more generally on the Dart side. The platform implementations currently have to do a lot of work, so maintaining platform support is quite tricky, especially around all the different sorts of media sources. Since all platforms should want lazy loading, and now this idea of child builders, it becomes even clearer that the platform doesn't really need to know all the details. The prepare phase could be made platform independent by detecting on the Dart side when playback approaches the end of the current item and triggering the Dart callback. Then the platform side doesn't need to manage the entire playlist; it only needs to manage a lookahead buffer large enough to support gapless playback.

To further simplify all the different sorts of media sources, I should also eventually drop the separate type for clipping audio sources and instead give every UriAudioSource a clipStart and clipEnd which can be null. LoopingAudioSource can also be handled completely on the Dart side since it merely involves duplication. In terms of backwards compatibility of the platform interface, I guess that we can use type …

I know that doesn't help progress this issue much; in fact it would set things back considerably. So maybe that's an idea for later, and I have to think about what can be done now, and where to head in the future.
Once this is implemented, adding this lazy feature sounds fairly trivial.
Indeed, these architectural changes sound great but will also take some time. If the …
Actually, now that I think about it, …

```dart
final audioSourceFactories = [
  () async => AudioSource.uri(await apiService.getUrl(queue[0])),
  () async => AudioSource.uri(await apiService.getUrl(queue[1])),
  () async => AudioSource.uri(await apiService.getUrl(queue[2])),
];
```

This may cause problems with the …
Just to chime in here, I actually ended up implementing something similar to this for my project, which I'm using at the moment but which is definitely not ready for a PR, though I took a slightly different route. Since I only support Android currently, I took advice from the ExoPlayer issue I found and implemented the …

You can see my changes so far here if you're curious (ignore the commented-out error track skipping, that's just some behavior I wasn't a fan of): https://github.com/ryanheise/just_audio/compare/just_audio-v0.9.24...austinried:just_audio:subtracks?expand=1

For my use case, the only thing I really want to be able to load just-in-time is the URL, because it's the only part that would require re-creating the media source. The other bit that I load just-in-time is the image, but I can update the metadata (using audio_service currently) separately without involving the player for that. I had tried earlier methods here that involved replacing the source in the queue from the Dart side as I loaded things, but that led to some undesirable side effects in the player.

Anyway, just wanted to see if you had considered …
Thanks @austinried, I did consider …
Interesting. I guess they both achieve the same result, but what a great name: …
So how about it - shall we agree on …
Hello @hacker1024, I've copied …
Why do you say the tag is generated? I don't think this PR changes anything about the way tags work in normal usage of just_audio.
I have something like the example set up with this player, using a local Flask server.
Something like this throws an error
If I add them normally through a loop and …

I also noticed that, if you seek near the end using the progress bar, and do …
OK, so I don't think you explained how the tag was generated "by" …

@hacker1024, there is an interesting API design question here as to whether the tag of the outer audio source should be overridden to automatically delegate to the tag of the inner audio source...
Hi @hacker1024, I have implemented a ResolvingAudioSource in Dart without platform code, which resolves the song URL (with headers) before playback. This is a simple implementation and I hope it will help you.
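For readers skimming the thread, here is a rough sketch of that Dart-only idea. This is not cooaer's actual implementation; the class name, resolver callback, header handling, and fallback content type are assumptions. It leans on just_audio's StreamAudioSource, which streams the audio bytes through the local proxy (the drawback discussed further below).

```dart
import 'package:http/http.dart' as http;
import 'package:just_audio/just_audio.dart';

/// Sketch of a Dart-only resolving source: the real (e.g. signed) URL is
/// fetched lazily on the first byte request, then the audio is streamed
/// through just_audio's proxy.
class LazyUrlAudioSource extends StreamAudioSource {
  LazyUrlAudioSource(this._resolveUrl, {this.headers = const {}, dynamic tag})
      : super(tag: tag);

  /// Hypothetical callback that performs the secondary request for the URL.
  final Future<Uri> Function() _resolveUrl;
  final Map<String, String> headers;

  @override
  Future<StreamAudioResponse> request([int? start, int? end]) async {
    final uri = await _resolveUrl(); // deferred until playback/buffering
    final req = http.Request('GET', uri)..headers.addAll(headers);
    if (start != null || end != null) {
      // just_audio passes an exclusive end; HTTP ranges are inclusive.
      req.headers['Range'] = 'bytes=${start ?? 0}-${end != null ? end - 1 : ''}';
    }
    final res = await req.send();
    return StreamAudioResponse(
      sourceLength: start == null && end == null ? res.contentLength : null,
      contentLength: res.contentLength,
      offset: start ?? 0,
      stream: res.stream,
      contentType: res.headers['content-type'] ?? 'audio/mpeg',
    );
  }
}
```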
Thanks for the PR @cooaer. I did experiment with this method as well, as I mentioned in my first post. My implementation ended up being very similar to yours. This approach has a few drawbacks, though: …
Sorry for the delay - I've been quite busy lately. I quite like the name …
Ultimately, I think the best solution to this problem is to implement the "treadmill" as discussed. The queue should not be managed entirely by the platform; instead …

That feature will take some effort, though. In the meantime, if a temporary solution were to be merged, I like my implementation for the reasons I have stated. It does add quite a bit of platform functionality, though, which may not be ideal.
There might actually be another way to do it on the Dart side that would be quicker to implement than waiting for the treadmill. Basically, the only thing that the Dart-side solution needs is the trigger event for when to resolve the audio source. And that is something we already have, in theory, because we can listen to when the current position is approaching the end of the current indexed audio source. The only design issue is how to encapsulate this logic inside of the …
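To make that trigger concrete, here is a minimal Dart-only sketch using just_audio's positionStream, duration and currentIndex; the callback name and the 10-second threshold are made up for illustration.

```dart
import 'dart:async';
import 'package:just_audio/just_audio.dart';

/// Fires [onApproachingEnd] once per item when playback comes within
/// [threshold] of the end of the current indexed audio source.
StreamSubscription<Duration> watchForItemEnd(
  AudioPlayer player,
  void Function(int nextIndex) onApproachingEnd, {
  Duration threshold = const Duration(seconds: 10),
}) {
  int? firedForIndex;
  return player.positionStream.listen((position) {
    final duration = player.duration;
    final index = player.currentIndex;
    if (duration == null || index == null || firedForIndex == index) return;
    if (duration - position <= threshold) {
      firedForIndex = index;
      // e.g. resolve the real URL for item index + 1 here and swap the
      // corresponding placeholder child of a ConcatenatingAudioSource.
      onApproachingEnd(index + 1);
    }
  });
}
```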
Excuse me! Will this branch be merged? @ryanheise
My preference would be for a solution that does not involve changes to the platform interface that would need to be reverted in the future, since part of the design of the federated plugin architecture unfortunately makes it so that we can never make breaking changes - we must therefore try to get the platform interface correct the first time. So this issue is still open until we can find a solution that meets these requirements. My previous comment suggests one way to proceed, but until then you are free to actually use this PR directly (or #800).
Okay, thank you.
Hello, what is the status of this topic? I have a use case where I need to sign the URL just before starting playback. I've read the discussion and don't get everything since I haven't looked at the code. But regarding the other way you are thinking of, @ryanheise, I think that relying on approaching the end of the current source has a drawback: it assumes that we know which audio source will be played next. In my use case I don't: there is a playlist and by default all sources are played sequentially, but the user can also decide at any time which source to play. So if you are to trigger a new event that can be used to override the URL, I think it should fire just after the new source to play is set, but before any actual handling of the audio is done. Does that make sense to you?
I'm not sure if I understand exactly, the point in using …
@hacker1024 The proposal from @cooaer could fit my use case because I don't need web support, but I do need it to work on both Android and iOS, and I don't compose audio. But I'd like to have a better understanding of its drawbacks - could you please elaborate on these:
What kind of problem could it raise?
You mean streaming the actual audio source data? When you say inefficient, could it cause performance issues when playing the audio? Thank you
It involves more CPU because every byte of data from the audio source is then going through an extra layer of redirection. So rather than just reading every byte from the source once and then buffering it, it reads it, then redirects it, then reads it again, then buffers it. (Speaking of which, it would be a good idea at some point to move the proxy code into an isolate.) But you don't need to do any of that to implement this. A couple of solutions have been mentioned above that can be implemented in Dart without plugin support, and which don't involve adding that extra layer of streaming data.
Just wanted to chime in and ask if this really is how it should work? We have a buffer duration after all, so ideally the next indexed source should be resolved once it needs to be buffered, no matter how far along the currently playing source is, right?
You are correct in principle, although the goal of this use case is also to make it buffer the next item as late as possible, so you will probably also want to pass the appropriate load control parameters into the constructor for iOS and Android to influence the buffering behaviour (although keep in mind that iOS doesn't seem to be very cooperative with these settings).
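As a rough illustration of those parameters (the values are examples only, and the iOS caveat above still applies), the load-control options might be passed like this, assuming just_audio's AudioLoadConfiguration API:

```dart
import 'package:just_audio/just_audio.dart';

// Example values only: keep the forward buffer small so the next item is
// requested (and therefore resolved) as late as possible.
final player = AudioPlayer(
  audioLoadConfiguration: AudioLoadConfiguration(
    androidLoadControl: AndroidLoadControl(
      minBufferDuration: const Duration(seconds: 10),
      maxBufferDuration: const Duration(seconds: 20),
      bufferForPlaybackDuration: const Duration(milliseconds: 500),
      bufferForPlaybackAfterRebufferDuration: const Duration(seconds: 2),
    ),
    darwinLoadControl: DarwinLoadControl(
      // iOS treats this as a hint and may ignore it.
      preferredForwardBufferDuration: const Duration(seconds: 10),
    ),
  ),
);
```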
The reason we've been using the …
I still don't understand your use case and how you can't know what is to play next just before it's about to play. But in any case, it sounds like you have more advanced requirements than can be served by just_audio_background, and you should be using audio_service instead. There is no way to get a playlist in just_audio_background without it coinciding with that gapless playback mechanism that you want to avoid.
Now that the treadmill branch seems to be nearing merge, is there a better way to hook into the trigger event to resolve the audio URL (mentioned here)?
Not really, the treadmill branch is only an intermediate solution. It triggers the loading of the next item as soon as the current item is enqueued, but what we want here is to delay loading even further, until just before the next item is about to be played.
Ok, thank you. I was hoping to use one of these resolving solutions, but MappingAudioSource doesn't support iOS yet and the Dart-only ResolvingAudioSource doesn't work with the visualizer. I'll try to dig into the source code to understand if there's any way to work around that (it seems the visualizer currently only works when using AudioSource.uri, and the Dart-only ResolvingAudioSource isn't composable, as mentioned above).
Certainly I think a Dart-only solution is possible even today, by listening to the position stream and triggering your loading event when the position is near the end of the current item. If you need something today, that is certainly the way to go.
This PR implements some of the ideas discussed in #777.
API additions
MappingAudioSource, a new type of AudioSource, has been added. On supported platforms (more on that later), it allows the creation of an AudioSource to be deferred until it is truly needed by the player (for preloading or playback purposes).

This is useful for APIs that require a secondary request to retrieve an audio URL. For example:
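(The code example from the original description does not survive here; the following is only an illustrative sketch. The constructor of MappingAudioSource is assumed to take an async callback, and apiService.getUrl is a hypothetical secondary-request API.)

```dart
import 'package:just_audio/just_audio.dart';

// Illustrative sketch only; the real MappingAudioSource API may differ.
// apiService.getUrl(id) is a hypothetical call that performs the secondary
// request and returns a playable URL, or null on failure.
final playlist = ConcatenatingAudioSource(
  children: [
    for (final trackId in ['track-1', 'track-2', 'track-3'])
      MappingAudioSource(() async {
        final url = await apiService.getUrl(trackId);
        if (url == null) return null; // placeholder used; player moves on
        return AudioSource.uri(Uri.parse(url));
      }),
  ],
);
```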
Any type of IndexedAudioSource can be lazily created in this fashion.

If there's a problem creating the AudioSource, null can be returned. This causes an empty placeholder to be used, causing the player to move on to the next item (or stop).

The availability of MappingAudioSource on the current platform can be checked with the static MappingAudioSource.supportedOnCurrentPlatform getter.

Platform interface changes
The platform interface has been changed in a non-breaking manner.
- MappingAudioSourceMessage has been added.
- A getAudioSourceMessage function is accepted as a constructor argument. This function should return an AudioSource associated with an ID. It is an optional named argument, and will use a placeholder that throws when it is called if the real implementation is not provided.
- A supportsMappingAudioSource getter that is false by default.

Implementation details
Deferred AudioSource creation

This feature has been implemented by adding new audio source players (AudioSourcePlayer on the Web, MediaSource on Android) that create, and start using, an underlying audio source player when their prepare/load method is first called.

The MethodChannel platform interface implementation has implemented a "createMappedAudioSourceSource" API to facilitate this. When given a MappingAudioSource ID, it creates the underlying AudioSource and returns its message representation.

Placeholder media when null is returned

If a problem is encountered creating the AudioSource in the MappingAudioSource, null should be returned. When this happens, an empty audio source is used. This is implemented with:

- SilenceMediaSource on Android

Supported platforms
As the typical implementation of this feature relies on asynchronous load/prepare methods in platform audio player APIs, supporting it is not feasible on platforms like macOS and iOS.

I have experimented with taking advantage of the proxy functionality to allow deferred URLs to be used in a similar way, but this isn't very useful at the moment, as macOS (and I assume iOS) makes an HTTP request as soon as the source is added, even if it's not meant to be played yet.