issue:4073128 Adding Top 10 events and event bursts #253
Conversation
if args.interactive:
    import IPython

    IPython.embed()
Moved it here so that, when using this "debug"/"advanced" mode, the user can still see all the temp files before they are deleted.
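For context, a minimal sketch of how such a flag can keep the embedded shell open while the temp directory still exists; the flag name, the temp-dir handling, and main() are illustrative assumptions, not this PR's actual wiring.

import argparse
import tempfile


def main():
    parser = argparse.ArgumentParser()
    # Hypothetical flag name; the real code may spell it differently.
    parser.add_argument("--interactive", action="store_true",
                        help="Drop into an IPython shell before temp files are removed")
    args = parser.parse_args()

    with tempfile.TemporaryDirectory() as temp_dir:
        # ... analysis writes intermediate files into temp_dir ...

        if args.interactive:
            # Imported lazily so IPython is only needed in "debug"/"advanced" mode.
            import IPython
            # The embedded shell runs while temp_dir still exists,
            # so the user can inspect the intermediate files.
            IPython.embed()
    # temp_dir and its files are deleted once the with-block exits.


if __name__ == "__main__":
    main()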
for critical_event in critical_events_burst:
    timestamp = critical_event['timestamp']
    event_type = critical_event['event_type']
    event = critical_event['event']
    counter = critical_event['count']
    event_text = f"{timestamp} {event_type} {event} {counter}"
    critical_events_text = critical_events_text + os.linesep + event_text

text = text + os.linesep + "More than 5 events burst over a minute:" \
    + os.linesep + critical_events_text
pdf = PDFCreator(pdf_path, pdf_header, png_images, text)
All this text handling has become a code smell that I plan to fix in another PR.
It should be moved inside each analyzer, like the image creation.
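A rough sketch of the refactor hinted at here, assuming each analyzer owns its report text the same way it owns its images; the class name and the get_summary_text() hook are hypothetical, not part of this PR.

import os


class EventsBurstAnalyzer:
    """Hypothetical analyzer that formats its own report section."""

    def __init__(self, critical_events_burst):
        self._critical_events_burst = critical_events_burst

    def get_summary_text(self):
        # Each analyzer builds its own text block, mirroring how images are created,
        # so the caller no longer concatenates raw strings itself.
        lines = ["More than 5 events burst over a minute:"]
        for critical_event in self._critical_events_burst:
            lines.append(
                f"{critical_event['timestamp']} {critical_event['event_type']} "
                f"{critical_event['event']} {critical_event['count']}"
            )
        return os.linesep.join(lines)


# The PDF step would then just collect text from every analyzer, e.g.:
# text = os.linesep.join(a.get_summary_text() for a in analyzers)
# pdf = PDFCreator(pdf_path, pdf_header, png_images, text)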
def analyze_events(self):
    grouped = self._log_data_sorted.groupby(["object_id", "event"])
    event_counts_df = grouped.size().reset_index(name="count")
    event_counts_df = event_counts_df.sort_values(
        ["event", "count"], ascending=False
    )
    self._log_data_sorted[["device", "description"]] = self._log_data_sorted.apply(
        self._split_switch_object_id, axis=1
    )
    event_counts = (
        self._log_data_sorted.groupby(["event", "device", "description"])
        .size()
        .reset_index(name="count")
    )
    log.LOGGER.debug(event_counts.head())
Removed since it is not used
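For reference, a small pandas sketch of the groupby counting pattern used above and how a top-10 cut could be taken from it; the sample data and the nlargest() step are illustrative assumptions based on the PR title, not code from this change.

import pandas as pd

# Illustrative data only; the real logs have more columns.
log_data = pd.DataFrame({
    "event": ["link_down", "link_down", "link_up", "temp_high", "link_down"],
    "device": ["sw01", "sw02", "sw01", "sw03", "sw01"],
})

# Same groupby/size/reset_index pattern as analyze_events(),
# then nlargest() keeps only the 10 most frequent rows.
event_counts = (
    log_data.groupby(["event", "device"])
    .size()
    .reset_index(name="count")
)
top_10 = event_counts.nlargest(10, "count")
print(top_10)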
def get_events_by_log_level_and_event_types_as_count(self, log_level="CRITICAL"):
    if log_level not in self._supported_log_levels:
        log.LOGGER.error(
            f"Requested log level {log_level} is not valid, "
            f"options are {self._supported_log_levels}"
        )
        return None
    events_by_log_level = self.get_events_by_log_level(log_level)
    return events_by_log_level["event_type"].value_counts()
Removed since it is not used
What
Adding top 10 events per time period and event bursts.
Why?
Provides more data to the clients.
How?
Analyzing the event logs and adding the information to the report; a rough sketch of the burst check is shown below.
Testing?
Manually ran the e2e flow and saw the data; also varied the number of events and verified it worked.
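A hedged sketch of how the "more than 5 events over a minute" burst check could look with pandas; the rolling-window approach, column names, and sample data are assumptions based on the text in the diff, not this PR's actual implementation.

import pandas as pd

# Illustrative events; real data would come from the parsed logs.
events = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2024-01-01 10:00:05", "2024-01-01 10:00:20", "2024-01-01 10:00:30",
        "2024-01-01 10:00:40", "2024-01-01 10:00:50", "2024-01-01 10:00:55",
        "2024-01-01 11:30:00",
    ]),
    "event_type": ["CRITICAL"] * 7,
    "event": ["link_down"] * 7,
})

# Count events in a rolling 1-minute window ending at each event.
counts = (
    events.sort_values("timestamp")
    .set_index("timestamp")["event"]
    .rolling("1min")
    .count()
)

# A burst is any window holding more than 5 events.
burst_times = counts[counts > 5]
print(burst_times)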
Special triggers
Use the following phrases as comments to trigger different runs
bot:retest
rerun Jenkins CI (to rerun GitHub CI, use the "Checks" tab on the PR page and rerun all jobs)
bot:upgrade
run additional update tests