
[SOAR-18543] Palo Alto Cortex XDR #3040

Open · wants to merge 5 commits into develop from soar-18543_palo_alto_cortex_xdr

Conversation

ablakley-r7 (Collaborator)

Proposed Changes

Description

Describe the proposed changes:

  • Update query start/end time logic to use the query's end time as the next start time on non-pagination runs. This prevents duplicate events from being repeatedly processed and raised as new events.
  • Update error handling to return the error response data in the exception
  • Update unit tests for the new pagination logic
  • Update custom config naming to match other plugins
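The first bullet's windowing idea can be sketched as follows. This is a minimal illustration only; the function name, state key, and `DEFAULT_LOOKBACK_HOURS` value are hypothetical, not the plugin's actual code:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_LOOKBACK_HOURS = 24  # assumed default lookback


def get_query_window(state: dict) -> tuple:
    """Return (start_time, end_time) in Unix seconds for the next query.

    On non-pagination runs the previous query's end time becomes the new
    start time, so the same interval is never fetched twice and old events
    are not re-raised as new.
    """
    now = datetime.now(timezone.utc)
    end_time = int(now.timestamp())
    # Resume from where the last query ended; fall back to a lookback window.
    start_time = state.get("last_query_end_time")
    if start_time is None:
        start_time = int((now - timedelta(hours=DEFAULT_LOOKBACK_HOURS)).timestamp())
    state["last_query_end_time"] = end_time
    return start_time, end_time


state = {}
first = get_query_window(state)
second = get_query_window(state)
# The second window starts exactly where the first ended.
assert second[0] == first[1]
```

Because each window's start is read back from the stored end, consecutive runs tile the timeline with no overlap, which is what removes the duplicate-event behavior described above.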

PR Requirements

Developers, verify you have completed the following items by checking them off:

Testing

Unit Tests

Review our documentation on generating and writing plugin unit tests

  • Unit tests written for any new or updated code

In-Product Tests

If you are an InsightConnect customer or have access to an InsightConnect instance, the following in-product tests should be done:

  • Screenshot of job output with the plugin changes
  • Screenshot of the changed connection, actions, or triggers input within the InsightConnect workflow builder

Style

Review the style guide

  • For dependencies, pin OS package and Python package versions
  • For security, set least privileged account with USER nobody in the Dockerfile when possible
  • For size, use the slim SDK images when possible: rapid7/insightconnect-python-3-38-slim-plugin:{sdk-version-num} and rapid7/insightconnect-python-3-38-plugin:{sdk-version-num}
  • For error handling, use of PluginException and ConnectionTestException
  • For logging, use self.logger
  • For docs, use changelog style
  • For docs, validate markdown with insight-plugin validate which calls icon_validate to lint help.md

Functional Checklist

  • Work fully completed
  • Functional
    • Any new actions/triggers include JSON test files in the tests/ directory created with insight-plugin samples
    • Tests should all pass unless it's a negative test. Negative tests have a naming convention of tests/$action_bad.json
    • Unsuccessful tests should fail by raising an exception, causing the plugin to die; successful tests should return an object
    • Add functioning test results to PR, sanitize any output if necessary
      • Single action/trigger insight-plugin run -T tests/example.json --debug --jq
      • All actions/triggers shortcut insight-plugin run -T all --debug --jq (use PR format at end)
    • Add functioning run results to PR, sanitize any output if necessary
      • Single action/trigger insight-plugin run -R tests/example.json --debug --jq
      • All actions/triggers shortcut insight-plugin run --debug --jq (use PR format at end)

Assessment

You must validate your work to reviewers:

  1. Run insight-plugin validate and make sure everything passes
  2. Run the assessment tool: insight-plugin run -A. For single action validation: insight-plugin run tests/{file}.json -A
  3. Copy (insight-plugin ... | pbcopy) and paste the output in a new post on this PR
  4. Add required screenshots from the In-Product Tests section

@joneill-r7 joneill-r7 requested a review from a team as a code owner January 10, 2025 11:57
@ablakley-r7 ablakley-r7 force-pushed the soar-18543_palo_alto_cortex_xdr branch from 82083d8 to 6c7ef56 Compare January 10, 2025 12:01
…hange how custom config is named in line with other plugins | Update SDK | Update error handling to return response data in data field
@ablakley-r7 ablakley-r7 force-pushed the soar-18543_palo_alto_cortex_xdr branch from d275283 to a26e0fa Compare January 14, 2025 07:45
@@ -502,20 +511,24 @@ def build_request(self, url: str, headers: dict, post_body: dict) -> Response:
    request = requests.Request(method="post", url=url, headers=headers, json=post_body)

    custom_config_exceptions = {
-       HTTPStatusCodes.BAD_REQUEST: PluginException(cause="API Error. ", assistance="Bad request, invalid JSON."),
+       HTTPStatusCodes.BAD_REQUEST: PluginException(
+           cause=PluginException.causes.get(PluginException.Preset.BAD_REQUEST),
Review comment (Collaborator):
For readability, I think passing just the preset does the same thing, and the exception class derives the cause? (But I could be remembering wrong.)

HTTPStatusCodes.BAD_REQUEST: PluginException(preset=PluginException.Preset.BAD_REQUEST, assistance="Bad request, invalid JSON.")

data=error.data.text,
status_code=error.data.status_code,
)
raise error
Review comment (Collaborator):
This looks like it will skip _handle_401. We tend to run into an issue with keys expiring, and a refresh gets us up and running again. Although I'm not sure this code is ever hit, as the keys look like they expire after 15 minutes.
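The concern above is that raising immediately bypasses the token-refresh path. A generic sketch of the refresh-and-retry pattern being referenced (the function names and the retry policy here are assumptions for illustration, not the plugin's `_handle_401` implementation):

```python
def request_with_refresh(send, refresh_token, max_retries=1):
    """Call `send()`; on a 401 status, refresh credentials once and retry.

    `send` performs the HTTP request and returns a response object with a
    `status_code` attribute; `refresh_token` re-authenticates (e.g. fetches
    a fresh API key) before the retry.
    """
    response = send()
    for _ in range(max_retries):
        if response.status_code != 401:
            break
        refresh_token()    # re-authenticate before retrying
        response = send()  # retry the original request with fresh credentials
    return response
```

If the new error handling raises before a wrapper like this runs, an expired-key 401 becomes a hard failure instead of a transparent refresh, which is the regression the comment is flagging.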

Comment on lines +215 to +217
if not start_time:
start_time = self.convert_datetime_to_unix(now_date_time - timedelta(hours=DEFAULT_LOOKBACK_HOURS))
end_time = now_unix
Review comment (Collaborator):
Is this not already handled in get_query_time? We always return a start_time, so we never hit this bit of code?

self.logger.info("Adjusting start time to cutoff value")
start_time = max_lookback_unix
# Reset search_from and search_to if this is not a backfill
if not custom_config:
Review comment (Collaborator):
If we've passed an alert limit into the custom config, this could break the logic. Would it be better to use the lookback-type key we've used in other cases?

"""
old_hashes = state.get(LAST_ALERT_HASH, [])
deduped_alerts = 0
new_alerts = []
new_hashes = []
-   highest_timestamp = state.get(LAST_ALERT_TIME, 0)
+   highest_timestamp = 0

# Create a new hash for every new alert
for _, alert in enumerate(alerts):
Review comment (Collaborator):
I know this was already existing, but do we need to use enumerate here? The index is unused.
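For reference, the dedup loop works with plain iteration since the index is discarded. A sketch of the hash-based dedup idea, with an assumed alert shape and hashing helper (not the plugin's exact code):

```python
import hashlib
import json


def hash_alert(alert: dict) -> str:
    # Stable hash over the alert's sorted JSON representation.
    return hashlib.sha1(json.dumps(alert, sort_keys=True).encode()).hexdigest()


def dedupe_alerts(alerts, old_hashes):
    """Drop alerts whose hash was seen on a previous run.

    Returns the new alerts, their hashes (to persist in state for the next
    run), and the highest alert timestamp encountered.
    """
    new_alerts, new_hashes = [], []
    highest_timestamp = 0
    for alert in alerts:  # plain iteration; no enumerate needed
        alert_hash = hash_alert(alert)
        if alert_hash in old_hashes:
            continue
        new_alerts.append(alert)
        new_hashes.append(alert_hash)
        highest_timestamp = max(highest_timestamp, alert.get("creation_time", 0))
    return new_alerts, new_hashes, highest_timestamp
```

Persisting `new_hashes` in state and feeding it back as `old_hashes` on the next poll is what keeps boundary-overlapping alerts from being raised twice.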
