Collect TeamViewer logs
This document explains how to ingest TeamViewer logs into Google Security Operations using Amazon S3. The parser extracts audit events from JSON-formatted logs. It iterates through event details, mapping specific properties to Unified Data Model (UDM) fields, handling participant and presenter information, and categorizing events based on user activity. The parser also performs data transformations, such as merging labels and converting timestamps to a standardized format.
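For reference, a single audit event carries an event name, a timestamp, an affected item, and a list of EventDetails entries with PropertyName/NewValue pairs (these are the fields referenced in the UDM mapping table at the end of this document). The values in the following sketch are hypothetical and only illustrate the structure the parser iterates over:

```python
# Hypothetical shape of one TeamViewer audit event. Field names follow the
# UDM mapping table in this document; the values are illustrative only.
sample_event = {
    "EventName": "StartedSession",
    "Timestamp": "2024-05-01T12:34:56Z",
    "AffectedItem": "1234567890",
    "EventDetails": [
        {"PropertyName": "ID of participant", "NewValue": "987654321"},
        {"PropertyName": "Name of participant", "NewValue": "Alex Example"},
    ],
}

# The parser walks each EventDetails entry and maps PropertyName/NewValue
# pairs to UDM fields.
for detail in sample_event["EventDetails"]:
    print(detail["PropertyName"], "->", detail["NewValue"])
```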
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to TeamViewer.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Get TeamViewer prerequisites
- Sign in to the TeamViewer Management Console as an administrator.
- Go to My Profile > Apps.
- Click Create app.
- Provide the following configuration details:
- App name: Enter a descriptive name (for example, Google SecOps Integration).
- Description: Enter a description for the app.
- Permissions: Select the permissions required for audit log access.
- Click Create and save the generated API credentials in a secure location.
- Record your TeamViewer API Base URL (for example, https://webapi.teamviewer.com/api/v1).
- Copy and save the following details in a secure location:
- CLIENT_ID
- CLIENT_SECRET
- API_BASE_URL
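To confirm the credentials work before wiring up AWS, you can request a token with the OAuth2 client credentials flow. The sketch below mirrors the token call made by the Lambda function later in this guide; the /oauth2/token path and the placeholder values are assumptions you should adjust to your tenant:

```python
# Minimal sketch: verify the TeamViewer API credentials by requesting an
# OAuth2 access token (client credentials flow). Placeholder values only.
import json
import urllib.parse
from urllib.request import Request, urlopen

API_BASE_URL = "https://webapi.teamviewer.com/api/v1"  # your API_BASE_URL
CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

data = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
}).encode("utf-8")

req = Request(f"{API_BASE_URL}/oauth2/token", data=data, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

with urlopen(req, timeout=30) as resp:
    token = json.loads(resp.read())
    print("Token type:", token.get("token_type"))
    print("Expires in:", token.get("expires_in"), "seconds")
```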
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket.
- Save the bucket Name and Region for future reference (for example, teamviewer-logs).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select Security credentials tab.
- In the Access keys section, click Create access key.
- Select Third-party service as the use case.
- Click Next.
- Optional: Add description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select Permissions tab.
- In the Permissions policies section, click Add permissions.
- Select Add permissions.
- Select Attach policies directly.
- Search for AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
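If you want to confirm the access key works before moving on, a short boto3 check such as the following uploads and removes a test object. The credentials, region, and bucket name are placeholders to replace with your own values:

```python
# Minimal sketch: confirm the new access key can write to the bucket.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",  # placeholder
    region_name="us-east-1",                         # your bucket's region
)

bucket = "teamviewer-logs"
s3.put_object(Bucket=bucket, Key="teamviewer/audit/connectivity-test.txt", Body=b"ok")
print(s3.list_objects_v2(Bucket=bucket, Prefix="teamviewer/audit/").get("KeyCount"))
s3.delete_object(Bucket=bucket, Key="teamviewer/audit/connectivity-test.txt")
```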
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy.
Policy JSON (replace teamviewer-logs if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::teamviewer-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::teamviewer-logs/teamviewer/audit/state.json"
    }
  ]
}
```
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role TeamViewerToS3Role and click Create role.
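If you prefer to script this step, a boto3 sketch like the following creates the role with the standard Lambda trust policy and attaches the policy you just created. The policy name and account ID in the ARN are placeholders:

```python
# Minimal sketch: create TeamViewerToS3Role programmatically instead of in
# the console. Replace the policy ARN with the policy you created above.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="TeamViewerToS3Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.attach_role_policy(
    RoleName="TeamViewerToS3Role",
    PolicyArn="arn:aws:iam::123456789012:policy/teamviewer-s3-write-policy",  # placeholder
)
print(role["Role"]["Arn"])
```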
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

Setting | Value |
---|---|
Name | teamviewer_to_s3 |
Runtime | Python 3.13 |
Architecture | x86_64 |
Execution role | TeamViewerToS3Role |
After the function is created, open the Code tab, delete the stub, and paste the following code (teamviewer_to_s3.py).

```python
#!/usr/bin/env python3
# Lambda: Pull TeamViewer audit logs and store raw JSON payloads to S3
# - Time window passed as from_date/to_date query parameters (UTC ISO 8601), URL-encoded
# - Preserves vendor-native JSON format for audit and session data
# - Retries with exponential backoff; unique S3 keys to avoid overwrites

import os, json, time, uuid, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
import boto3

S3_BUCKET     = os.environ["S3_BUCKET"]
S3_PREFIX     = os.environ.get("S3_PREFIX", "teamviewer/audit/")
STATE_KEY     = os.environ.get("STATE_KEY", "teamviewer/audit/state.json")
WINDOW_SEC    = int(os.environ.get("WINDOW_SECONDS", "3600"))  # default 1h
HTTP_TIMEOUT  = int(os.environ.get("HTTP_TIMEOUT", "60"))
API_BASE_URL  = os.environ["API_BASE_URL"]
CLIENT_ID     = os.environ["CLIENT_ID"]
CLIENT_SECRET = os.environ["CLIENT_SECRET"]
MAX_RETRIES   = int(os.environ.get("MAX_RETRIES", "3"))
USER_AGENT    = os.environ.get("USER_AGENT", "teamviewer-to-s3/1.0")

s3 = boto3.client("s3")


def _load_state():
    try:
        obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
        return json.loads(obj["Body"].read())
    except Exception:
        return {}


def _save_state(st):
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=STATE_KEY,
        Body=json.dumps(st, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )


def _iso(ts: float) -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))


def _get_access_token() -> str:
    # OAuth2 Client Credentials flow for TeamViewer API
    token_url = f"{API_BASE_URL.rstrip('/')}/oauth2/token"
    data = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }).encode("utf-8")
    req = Request(token_url, data=data, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    req.add_header("User-Agent", USER_AGENT)
    with urlopen(req, timeout=HTTP_TIMEOUT) as r:
        response = json.loads(r.read())
        return response["access_token"]


def _build_audit_url(from_ts: float, to_ts: float, access_token: str) -> str:
    # Build URL for TeamViewer audit API endpoint
    base_endpoint = f"{API_BASE_URL.rstrip('/')}/reports/connections"
    params = {
        "from_date": _iso(from_ts),
        "to_date": _iso(to_ts),
    }
    query_string = urllib.parse.urlencode(params)
    return f"{base_endpoint}?{query_string}"


def _fetch_audit_data(url: str, access_token: str) -> tuple[bytes, str]:
    attempt = 0
    while True:
        req = Request(url, method="GET")
        req.add_header("User-Agent", USER_AGENT)
        req.add_header("Authorization", f"Bearer {access_token}")
        req.add_header("Accept", "application/json")
        try:
            with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                return r.read(), (r.headers.get("Content-Type") or "application/json")
        except (HTTPError, URLError) as e:
            attempt += 1
            print(f"HTTP error on attempt {attempt}: {e}")
            if attempt > MAX_RETRIES:
                raise
            # exponential backoff with jitter
            time.sleep(min(60, 2 ** attempt) + (time.time() % 1))


def _put_audit_data(blob: bytes, content_type: str, from_ts: float, to_ts: float) -> str:
    # Create unique S3 key for audit data
    ts_path = time.strftime("%Y/%m/%d", time.gmtime(to_ts))
    uniq = f"{int(time.time()*1e6)}_{uuid.uuid4().hex[:8]}"
    key = f"{S3_PREFIX}{ts_path}/teamviewer_audit_{int(from_ts)}_{int(to_ts)}_{uniq}.json"
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=blob,
        ContentType=content_type,
        Metadata={
            "source": "teamviewer-audit",
            "from_timestamp": str(int(from_ts)),
            "to_timestamp": str(int(to_ts)),
        },
    )
    return key


def lambda_handler(event=None, context=None):
    st = _load_state()
    now = time.time()
    from_ts = float(st.get("last_to_ts") or (now - WINDOW_SEC))
    to_ts = now

    # Get OAuth2 access token
    access_token = _get_access_token()
    url = _build_audit_url(from_ts, to_ts, access_token)
    print(f"Fetching TeamViewer audit data from: {url}")

    blob, ctype = _fetch_audit_data(url, access_token)

    # Validate that we received valid JSON data
    try:
        audit_data = json.loads(blob)
        print(f"Successfully retrieved {len(audit_data.get('records', []))} audit records")
    except json.JSONDecodeError as e:
        print(f"Warning: Invalid JSON received: {e}")

    key = _put_audit_data(blob, ctype, from_ts, to_ts)

    st["last_to_ts"] = to_ts
    st["last_successful_run"] = now
    _save_state(st)

    return {
        "statusCode": 200,
        "body": {
            "success": True,
            "s3_key": key,
            "content_type": ctype,
            "from_timestamp": from_ts,
            "to_timestamp": to_ts,
        },
    }


if __name__ == "__main__":
    print(lambda_handler())
```
Go to Configuration > Environment variables.
Click Edit > Add new environment variable.
Enter the environment variables provided in the following table, replacing the example values with your values.
Environment variables
Key | Example value |
---|---|
S3_BUCKET | teamviewer-logs |
S3_PREFIX | teamviewer/audit/ |
STATE_KEY | teamviewer/audit/state.json |
WINDOW_SECONDS | 3600 |
HTTP_TIMEOUT | 60 |
MAX_RETRIES | 3 |
USER_AGENT | teamviewer-to-s3/1.0 |
API_BASE_URL | https://webapi.teamviewer.com/api/v1 |
CLIENT_ID | your-client-id (from step 2) |
CLIENT_SECRET | your-client-secret (from step 2) |

After the function is created, stay on its page (or open Lambda > Functions > your-function).
Select the Configuration tab.
In the General configuration panel click Edit.
Change Timeout to 5 minutes (300 seconds) and click Save.
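Once the code, environment variables, and timeout are in place, you can optionally test-invoke the function and confirm that an object lands under the audit prefix. The sketch below assumes the function name teamviewer_to_s3 and the bucket from the table above:

```python
# Minimal sketch: invoke the Lambda once, print its response, then list the
# newest objects under the audit prefix to confirm the upload.
import json
import boto3

lambda_client = boto3.client("lambda")
resp = lambda_client.invoke(
    FunctionName="teamviewer_to_s3",
    InvocationType="RequestResponse",
)
print(json.loads(resp["Payload"].read()))

s3 = boto3.client("s3")
listing = s3.list_objects_v2(Bucket="teamviewer-logs", Prefix="teamviewer/audit/")
for obj in listing.get("Contents", [])[-5:]:
    print(obj["Key"], obj["Size"])
```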
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
- Recurring schedule: Rate (1 hour).
- Target: Your Lambda function teamviewer_to_s3.
- Name: teamviewer-audit-1h.
- Click Create schedule.
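As an alternative to the console, the schedule can be created with the EventBridge Scheduler API. The sketch below assumes a scheduler execution role that is allowed to call lambda:InvokeFunction on the function; the role and function ARNs are placeholders:

```python
# Minimal sketch: create the hourly schedule with the EventBridge Scheduler API.
# Replace the Lambda ARN and the scheduler execution role ARN with your own.
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="teamviewer-audit-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:teamviewer_to_s3",
        "RoleArn": "arn:aws:iam::123456789012:role/teamviewer-scheduler-role",
    },
)
```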
(Optional) Create read-only IAM user and keys for Google SecOps
- Go to AWS Console > IAM > Users > Add users.
- Click Add users.
- Provide the following configuration details:
- User: Enter secops-reader.
- Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::teamviewer-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::teamviewer-logs"
    }
  ]
}
```
Name the policy secops-reader-policy.
Click Create policy > search/select > Next > Add permissions.
Create an access key for secops-reader: Security credentials > Access keys.
Click Create access key.
Download the CSV file. (You'll paste these values into the feed.)
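Before entering the keys into the feed, you can optionally confirm that the read-only credentials can list and read objects under the audit prefix. The credential values below are placeholders for the values in the downloaded CSV:

```python
# Minimal sketch: confirm the secops-reader key can list and read objects
# under the audit prefix.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="SECOPS_READER_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="SECOPS_READER_SECRET_ACCESS_KEY",  # placeholder
)

listing = s3.list_objects_v2(
    Bucket="teamviewer-logs", Prefix="teamviewer/audit/", MaxKeys=5
)
for obj in listing.get("Contents", []):
    body = s3.get_object(Bucket="teamviewer-logs", Key=obj["Key"])["Body"].read()
    print(obj["Key"], len(body), "bytes")
```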
Configure a feed in Google SecOps to ingest TeamViewer logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, TeamViewer logs).
- Select Amazon S3 V2 as the Source type.
- Select TeamViewer as the Log type.
- Click Next.
- Specify values for the following input parameters:
- S3 URI: s3://teamviewer-logs/teamviewer/audit/
- Source deletion options: Select deletion option according to your preference.
- Maximum File Age: Include files modified in the last number of days. Default is 180 days.
- Access Key ID: User access key with access to the S3 bucket.
- Secret Access Key: User secret key with access to the S3 bucket.
- Asset namespace: The asset namespace.
- Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM mapping table
Log field | UDM mapping | Logic |
---|---|---|
AffectedItem | metadata.product_log_id | The value of AffectedItem from the raw log is directly mapped to this UDM field. |
EventDetails.NewValue | principal.resource.attribute.labels.value | If PropertyName contains (server), the NewValue is used as the value of a label in principal.resource.attribute.labels. |
EventDetails.NewValue | principal.user.user_display_name | If PropertyName is Name of participant, the NewValue is used as the user display name for the principal. |
EventDetails.NewValue | principal.user.userid | If PropertyName is ID of participant, the NewValue is used as the user ID for the principal. |
EventDetails.NewValue | security_result.about.labels.value | For all other PropertyName values (except those handled by specific conditions), the NewValue is used as the value of a label within the security_result.about.labels array. |
EventDetails.NewValue | target.file.full_path | If PropertyName is Source file, the NewValue is used as the full path for the target file. |
EventDetails.NewValue | target.resource.attribute.labels.value | If PropertyName contains (client), the NewValue is used as the value of a label in target.resource.attribute.labels. |
EventDetails.NewValue | target.user.user_display_name | If PropertyName is Name of presenter, the NewValue is parsed. If it's an integer, it's discarded. Otherwise, it's used as the user display name for the target. |
EventDetails.NewValue | target.user.userid | If PropertyName is ID of presenter, the NewValue is used as the user ID for the target. |
EventDetails.PropertyName | principal.resource.attribute.labels.key | If PropertyName contains (server), the PropertyName is used as the key of a label in principal.resource.attribute.labels. |
EventDetails.PropertyName | security_result.about.labels.key | For all other PropertyName values (except those handled by specific conditions), the PropertyName is used as the key of a label within the security_result.about.labels array. |
EventDetails.PropertyName | target.resource.attribute.labels.key | If PropertyName contains (client), the PropertyName is used as the key of a label in target.resource.attribute.labels. |
EventName | metadata.product_event_type | The value of EventName from the raw log is directly mapped to this UDM field. |
Timestamp | metadata.event_timestamp | The value of Timestamp from the raw log is parsed and used as the event timestamp in the metadata. |
| metadata.event_type | Set to USER_UNCATEGORIZED if src_user (derived from ID of participant) is not empty, otherwise set to USER_RESOURCE_ACCESS. |
| metadata.product_name | Hardcoded to TEAMVIEWER. |
| metadata.vendor_name | Hardcoded to TEAMVIEWER. |
| metadata.log_type | Hardcoded to TEAMVIEWER. |
Need more help? Get answers from Community members and Google SecOps professionals.