
Scalopus record script improvements #10

Open
3 tasks
efernandez opened this issue Oct 1, 2019 · 3 comments

@efernandez
Contributor

Improvements for the record script contributed in #9:

  • -o filename for writing directly to a named file; detect the .gz extension and gzip the output if appropriate. It should still default to stdout. (See the sketch at the end of this comment.)
  • Limit recording to a number of events. (!) This might require some support from the backend.
  • Add logging for -v(v) that displays the number of traces, etc. (!) Logging should probably be disabled if the script is writing to stdout.

@iwanders Any comments/corrections? This is based on your comment in #9 (comment)
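
A rough sketch of what the flag handling could look like (names and defaults here are illustrative, not the actual scalopus record script interface):

```python
# Hypothetical sketch of the -o / -v handling for the record script.
import argparse
import gzip
import sys

parser = argparse.ArgumentParser(description="Record scalopus traces.")
parser.add_argument("-o", "--output", default=None,
                    help="Output file; a '.gz' suffix enables gzip. Defaults to stdout.")
parser.add_argument("-v", "--verbose", action="count", default=0,
                    help="Increase logging verbosity (-v, -vv).")
args = parser.parse_args()

if args.output is None:
    out = sys.stdout.buffer             # default: write traces to stdout
elif args.output.endswith(".gz"):
    out = gzip.open(args.output, "wb")  # compress when the extension asks for it
else:
    out = open(args.output, "wb")
```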

@efernandez efernandez mentioned this issue Oct 1, 2019
@iwanders
Owner

iwanders commented Oct 1, 2019

I think this captures it. We can log to stderr if we are writing the traces to stdout; that way we still get info when piping the output into a file.
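
A minimal sketch of that, assuming a `verbose` count parsed from the `-v`/`-vv` flags:

```python
# Route log output to stderr so stdout stays free for the trace data.
import logging
import sys

verbose = 1  # e.g. parsed from a -v flag
level = {0: logging.WARNING, 1: logging.INFO}.get(verbose, logging.DEBUG)
logging.basicConfig(stream=sys.stderr, level=level,
                    format="%(levelname)s: %(message)s")
logging.info("collected %d trace events so far", 1234)  # numbers are illustrative
```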

We may need a 'streaming' mode in the native trace source to support limiting the number of events and to allow us to, say, grep the streaming output.
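
On the consumer side, that could look something like the following; `stream_events` is the hypothetical streaming interface being proposed here, not an existing API:

```python
# Sketch of a consumer for a hypothetical streaming mode: stop after a
# fixed number of events, or filter them grep-style before writing.
import itertools
import json
import sys

def emit_matching(stream_events, limit=None, pattern=None):
    """Forward events to stdout, optionally bounded and filtered."""
    events = itertools.islice(stream_events, limit) if limit else stream_events
    for event in events:
        line = json.dumps(event)
        if pattern is None or pattern in line:
            sys.stdout.write(line + "\n")

# Example: forward at most 10 events whose JSON contains "tick".
# emit_matching(stream_events(), limit=10, pattern="tick")
```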

@efernandez
Contributor Author

streaming | grep would be awesome!

@iwanders
Owner

iwanders commented Oct 3, 2019

Yeah, we should be able to do that by moving parts of this function to another function that yields a vector of jsons from the internal collection container (recorded_data_) in this class.
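
Roughly, in Python terms (the real code lives in the C++ native trace source; `recorded_data_` here just mirrors the internal container mentioned above):

```python
# Rough rendering of the proposed refactor: split out a method that hands
# back the collected events in chunks instead of one final dump.
class NativeTraceSource:
    def __init__(self):
        self.recorded_data_ = []  # events appended by the collector

    def take_chunk(self):
        """Return (and clear) the events collected so far, so the record
        script can stream chunks as they accumulate."""
        chunk, self.recorded_data_ = self.recorded_data_, []
        return chunk
```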

One problem we will have is that, at the moment, we can get away with just one getMapping call at the end of the recording interval, which is guaranteed to include all mappings that were used in that interval. When we stream, we need to retrieve the mappings at the beginning, and possibly also in between if we encounter tracepoints for which we don't have a mapping yet. This is obviously less than ideal. We need to think a bit about whether we want to support encountering new tracepoints while streaming, or whether we expect all tracepoints to have already been encountered.
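
The "refresh mappings on demand" option could look roughly like this; the event's `"id"` key and the `take_chunk` helper are hypothetical (from the sketch above), only `getMapping` comes from the existing interface:

```python
# Sketch: fetch mappings up front, and fetch again whenever a chunk
# contains a trace id we have no mapping for yet.
def stream_with_mappings(source, sink):
    mappings = source.getMapping()                # initial snapshot
    for chunk in iter(source.take_chunk, []):     # stop on an empty chunk
        unknown = {e["id"] for e in chunk if e["id"] not in mappings}
        if unknown:
            mappings.update(source.getMapping())  # new tracepoints appeared
        sink(chunk, mappings)
```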
