Hi team,

CrowdStrike recently announced the deprecation of the Detects API endpoints, and Google shared an advisory explaining the impact and what actions to take if you are using the data feed in SIEM ingestion.

👉 Advisory: Important Advisory – Decommissioning of CrowdStrike’s Detects API

That part is clear for SIEM data feeds. But in our case, we use the CrowdStrike Detection Connector in Google SecOps SOAR, which creates cases from detections. Since the Detects API was decommissioned, the connector started failing with this error:

```
Error executing connector: "Detection Connector". Reason: An error occurred: 404 Client Error: Not Found for url: https://api.us-2.crowdstrike.com/detects/queries/detects/v1?filter=status%3A%27new%27%2Bfirst_behavior%3A%3E%3D%272025-09-29T20%3A48%3A07%27&sort=first_behavior.asc&limit=100 b'{\n "meta": {\n "query_time": 0.00030819,\n "powered_by": "legacy-detects",\n "trace_id": "39d6ade4-b8d1-479d-821f-08849878a2b1"\n },\n "resources": [],\n "erro
```
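For anyone migrating scripts off the legacy endpoint while waiting on an updated connector: CrowdStrike's stated replacement for the Detects API is the Alerts API. The sketch below builds a query URL in the newer style; the `/alerts/queries/alerts/v2` path and the `product`/`created_timestamp` filter fields are assumptions based on CrowdStrike's migration guidance, so verify them against your tenant's API documentation before relying on this.

```python
# Sketch: building a query against CrowdStrike's newer Alerts API, which
# replaces the decommissioned Detects API. The endpoint path and filter
# field names are assumptions from the migration guidance -- verify them
# against your tenant's API docs.
from urllib.parse import urlencode

BASE = "https://api.us-2.crowdstrike.com"

def build_alerts_query(status: str, since_iso: str, limit: int = 100) -> str:
    # The Alerts API filters on created_timestamp rather than
    # first_behavior, and EPP detections are selected with product:'epp'.
    fql = f"product:'epp'+status:'{status}'+created_timestamp:>='{since_iso}'"
    params = {"filter": fql, "sort": "created_timestamp.asc", "limit": limit}
    return f"{BASE}/alerts/queries/alerts/v2?{urlencode(params)}"

url = build_alerts_query("new", "2025-09-29T20:48:07Z")
```

The same bearer token used for the Detects endpoints should work, since only the resource path and filter vocabulary change.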
Hi, I can't find the Data Retention tab in the SecOps SIEM Settings, even as an administrator. Where else can I check the retention interval for logs in the SIEM?
Hi Google SecOps Community,

For folks who manage content via Google SecOps’ API using Content Manager, are there any additional features that would be useful for your security operations team? For example, the ability to manage the following content via the API and your CI/CD pipeline:

- Dashboards
- Saved Searches
- Curated Detections
- SOAR playbooks (and possibly other SOAR content)

For reference, Content Manager is able to manage the following content via the API today:

- Rules
- Rule Exclusions
- Data Tables
- Reference Lists
Hi, I'm trying to configure the new Chronicle integration via the Chronicle API because I want to add the “Data tables” actions to my playbooks, and they only work with the Chronicle API, not the legacy Backstory API. First of all, I created my service account with some permissions such as “getReferencesList.list” and others. Then I configured the API UI and API Root according to the documentation:

https://cloud.google.com/chronicle/docs/soar/marketplace-integrations/google-chronicle
https://cloud.google.com/chronicle/docs/reference/rest?rep_location=eu

API UI: https://INSTANCE.chronicle.security/
API Root: https://chronicle.eu.rep.googleapis.com/v1alpha/projects/PROJECT_ID/locations/eu/instances/INSTANCE_ID
User's Service Account: the JSON that I downloaded when I created the service account.

And I'm struggling here, since I get a 400 Bad Request error all the time:

400 Client Error: Bad Request
Unable to connect to Google Chronicle, please validate your credentials: Request contains an invalid
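One way to narrow this down is to take the SOAR integration out of the loop and call the regional endpoint directly with the same service-account JSON. The sketch below mirrors the API Root format from the docs; the `/referenceLists` suffix and the `cloud-platform` scope are assumptions to verify against your instance.

```python
# Sketch: sanity-checking the Chronicle API Root and service-account
# credentials outside of the SOAR integration. The URL format mirrors the
# documented regional endpoint; the "/referenceLists" call and the
# cloud-platform scope are assumptions to verify.
def chronicle_base_url(project_id: str, region: str, instance_id: str) -> str:
    return (
        f"https://chronicle.{region}.rep.googleapis.com/v1alpha/"
        f"projects/{project_id}/locations/{region}/instances/{instance_id}"
    )

# Uncomment to run with google-auth installed:
# from google.oauth2 import service_account
# from google.auth.transport.requests import AuthorizedSession
# creds = service_account.Credentials.from_service_account_file(
#     "sa.json", scopes=["https://www.googleapis.com/auth/cloud-platform"])
# session = AuthorizedSession(creds)
# resp = session.get(chronicle_base_url("PROJECT_ID", "eu", "INSTANCE_ID")
#                    + "/referenceLists")
# resp.raise_for_status()
```

If the direct call succeeds but the integration still returns 400, the problem is likely in how the API Root or credentials are pasted into the integration config (e.g. a trailing slash or a truncated JSON key) rather than in the permissions.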
Hello,

I am working on automating rule creation and deployment in Google Chronicle using the secops Python SDK inside Siemplify jobs.

I can successfully create rules and enable them (enable_rule works fine). However, I cannot enable alerting for the rules:

- The method set_rule_alerting does not exist on the ChronicleClient object.
- The method update_rule_deployment also does not exist.

Both of them appear in the documentation/examples: the update_rule_deployment implementation in secops-wrapper and a set_rule_alerting usage example. But in my environment (the chronicle object from SecOpsClient), they are not available when I run dir(chronicle).

Below is my current script:

```python
from SiemplifyJob import SiemplifyJob
from secops import SecOpsClient
import textwrap
import traceback

# -------------------------
# Global parameters
# -------------------------
RULE_ENABLED = True     # enable/disable continuous execution
RULE_ALERTING = True    # define if detections generate alerts
RUN_FREQUENCY = "LIVE"  # not guaranteed
```
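If the methods exist in the wrapper's source but not in your environment, the installed secops package is likely older than the documentation; checking `secops.__version__` and upgrading is the first thing to try. As a stopgap, a sketch of calling the rule-deployment REST resource directly is below. The `.../rules/{id}/deployment` path and `updateMask=alerting` parameter are assumptions drawn from the Chronicle REST reference, and the helper name is hypothetical, so verify both before use.

```python
# Sketch: falling back to the raw Chronicle v1alpha REST endpoint when the
# installed secops SDK lacks update_rule_deployment / set_rule_alerting.
# The ".../rules/{id}/deployment" resource and "updateMask=alerting" are
# assumptions from the Chronicle REST reference -- verify them first.
def rule_deployment_patch(base_url: str, rule_id: str, alerting: bool):
    # Build the URL, query params, and body for a PATCH on the rule's
    # deployment sub-resource, touching only the alerting field.
    url = f"{base_url}/rules/{rule_id}/deployment"
    params = {"updateMask": "alerting"}
    body = {"alerting": alerting}
    return url, params, body

# With an authorized session (e.g. google-auth AuthorizedSession):
# url, params, body = rule_deployment_patch(base_url, "ru_1234", True)
# session.patch(url, params=params, json=body).raise_for_status()
```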
Hello! I am struggling with how to handle nested arrays in my parsers. I have been reviewing the following documentation but I still am unable to fully wrap my head around how to make it all work. I have the following JSON log (it's a lot longer, but I just want to see how to start it):

```json
"resourceLogs": [
  {
    "resource": {
      "attributes": [
        { "key": "com.splunk.sourcetype", "value": { "stringValue": "<value here>" } },
        { "key": "host.name", "value": { "stringValue": "<value here>" } },
        { "key": "os.type", "value": { "stringValue": "linux" } }
      ]
    },
```

Following the documentation I conjured up the following, but I continue to run into generic errors. I haven't included “host.name” or “os.type” yet because I wasn't able to get the “source_type” out of the log: filte
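Before writing the parser loop, it can help to be explicit about the transformation the parser has to perform: each element of the `attributes` array is a `{"key": ..., "value": {"stringValue": ...}}` pair, and you want them as flat key/value fields. The Python sketch below is only an illustration of that mapping (the sample values are made up), not parser code; the parser's `for` loop over the array would do the equivalent.

```python
# Sketch: the flattening the parser ultimately has to perform -- turn the
# nested attributes array into flat key/value pairs. Sample values here
# ("otel", "web-01") are hypothetical stand-ins for "<value here>".
def flatten_attributes(resource: dict) -> dict:
    flat = {}
    for attr in resource.get("attributes", []):
        # Each element looks like {"key": ..., "value": {"stringValue": ...}}
        flat[attr["key"]] = attr.get("value", {}).get("stringValue")
    return flat

resource = {
    "attributes": [
        {"key": "com.splunk.sourcetype", "value": {"stringValue": "otel"}},
        {"key": "host.name", "value": {"stringValue": "web-01"}},
        {"key": "os.type", "value": {"stringValue": "linux"}},
    ]
}
flat = flatten_attributes(resource)
```

Once the target shape is clear, the parser side reduces to iterating the array and assigning each element's `value.stringValue` to a state variable keyed off `key`.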
Hello folks, I am running a Google SecOps POC. So far I have installed the BindPlane agent on Windows endpoints and connected their feed to Google SecOps. The issue I'm facing is that I'm unable to collect the correct telemetry, and in particular I'm unable to collect the PowerShell telemetry data. I have these logs forwarded to the Google SecOps SIEM, but I'm unable to create a case, which is basically the SOAR side. I've tried using the Chronicle connector but I'm unable to push these alerts to SOAR and create a case. If possible, please help me out at the earliest. Thank you.
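On the PowerShell telemetry side specifically: the OpenTelemetry collector that BindPlane agents are built on has a `windowseventlog` receiver that can read the PowerShell operational channel. The snippet below is a hedged sketch, not a verified BindPlane config; the channel name and pipeline wiring are assumptions to adapt to your agent's actual configuration, and the exporter is a placeholder for whatever SecOps exporter your feed already uses.

```yaml
# Sketch only -- adapt to your BindPlane agent's config format.
receivers:
  windowseventlog/powershell:
    channel: Microsoft-Windows-PowerShell/Operational
service:
  pipelines:
    logs/powershell:
      receivers: [windowseventlog/powershell]
      exporters: [chronicle]   # placeholder: your existing SecOps exporter
```

Note that PowerShell script-block logging must also be enabled via Group Policy on the endpoints, or the channel will carry little useful data.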
Hi all, in a manual case I have created the entity username. I executed the Azure Active Directory “Disable User Accounts” and “Revoke User Sessions” actions successfully. However, when I execute the “Reset User Password” action, I get the output “No user passwords were reset.” Could you please assist?
Hey, I want to output (in the outcome section) the exact value of the repeated field (e.g. target.ip) that was found in the data table with the IN operator. Is it even possible? Maybe you have some suggestions?

Example rule:

```
rule rule_name {
  meta:
    severity = "MEDIUM"
  events:
    $e.metadata.event_type = "NETWORK_CONNECTION"
    $e.target.ip = $target_ip
    $target_ip IN %dt.ioc
  outcome:
    $matched_value = ? // one of the IPs from target.ip that was actually found in the data table
  condition:
    $e
}
```
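One avenue worth testing: because `$target_ip` is a placeholder bound in the events section, each detection match should carry the specific array element that satisfied the `IN` check, so referencing the placeholder in the outcome may be enough. This is a sketch of that idea, not a confirmed answer; `array_distinct` is a real YARA-L aggregation function, but whether the placeholder binds per-element here should be verified against your instance.

```
outcome:
  // Collect the placeholder value(s) that satisfied the IN check.
  // Sketch only -- verify the per-element binding behavior.
  $matched_value = array_distinct($target_ip)
```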
Hi, does anybody have insights on how to handle compliance regulations such as GDPR with specific requirements to delete ingested logs after a specific time interval?

In Europe there are quite stringent regulations for data retention, and some metadata and log types are required to be deleted after some weeks or months. If my understanding is correct, SecOps has a flat retention of 1 year and there is no way to enforce data retention policies for specific log types or based on other parameters.

Is this not going to be an issue for audits? Is there no current way to handle the matter? Thank you all.
Occasionally, we experience an issue where a case is not created even though an alert is detected by a rule. When we contact support about this, we are usually told to update the Google Chronicle connector to the latest version. Updating often resolves the issue, but is it possible to set up some kind of system that automatically updates the connector when the latest version is released? If automatic updates are difficult, we would like to build a system that notifies us when an update is available. We look forward to your response. Best regards.
I am building a Chronicle detection rule for Azure AD sign-in logs. I want to detect events where a user logs in from an IP address that has not been observed on the same resource within the past 14 days. Could you please help with this?
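One common YARA-L pattern for "not seen in the last N days" is a match window with a non-existence check: a `$new` login event plus a negated `$old` event that requires an earlier login from the same IP to the same resource. The sketch below illustrates the shape only; the UDM field paths (Azure AD sign-ins typically map to `USER_LOGIN`, but the IP and resource fields depend on your parser) and the window semantics need validating against your ingested events.

```
// Sketch only: field paths and the non-existence pattern need validating
// against your ingested Azure AD sign-in UDM events.
rule azure_ad_login_from_new_ip {
  meta:
    severity = "MEDIUM"
  events:
    $new.metadata.event_type = "USER_LOGIN"
    $new.principal.ip = $ip
    $new.target.resource.name = $resource

    // $old is an earlier login from the same IP to the same resource.
    $old.metadata.event_type = "USER_LOGIN"
    $old.principal.ip = $ip
    $old.target.resource.name = $resource
    $old.metadata.event_timestamp.seconds < $new.metadata.event_timestamp.seconds

  match:
    $ip, $resource over 14d

  condition:
    // Fire only when no earlier matching login exists in the window.
    $new and !$old
}
```

Be aware that the 14-day window is bucketed rather than a true sliding lookback, so logins near a window boundary can still alert; if that matters, first-seen style enrichment may be a better fit.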
Hello folks,

I’m observing behavior in Google SecOps that seems to deviate from what I expected based on the documentation. I want to see if others have seen it or have insight into what’s happening internally. Below are the scenario, my observations, and the things I’ve already verified. I’d appreciate thoughts, explanations, or pointers to the relevant internal logic.

I ingest a single IOC entity (e.g. an IP or hash) via a parser, with its metadata and interval.start_time (and without interval.end_time). Right after ingestion, in Raw Log Search, I see exactly one entity record corresponding to that IOC, as expected. However, when I search using UDM Search over a larger time window (e.g. 1–2 months), I see many entries (15+), each with different interval.start_time / interval.end_time, as if there were one entry per day. As you can see in the attached image, when I click on any entry all of them are highlighted, which means that all of them refer to the same single entity. This seems inconsistent.