Galantis uses Laravel Nightwatch as its application-level error tracking and log explorer. When something fails — a message does not send, an automation does not enroll a customer, a webhook is not processed — Nightwatch contains the structured trace that shows exactly what happened, where it failed, and why. Understanding how to use Nightwatch is the most effective self-service diagnostic skill for Galantis. Most issues that are not resolved by the troubleshooting guides can be diagnosed by tracing the relevant event in Nightwatch and identifying the specific failing step.
Documentation Index
Fetch the complete documentation index at: https://docs.digifist.com/llms.txt
Use this file to discover all available pages before exploring further.
What this covers
- What Nightwatch captures and how it is structured
- How to find a relevant log entry
- Reading the log tree to trace a request chain
- What information to include when contacting support
- Common log patterns for frequent issue types
What Nightwatch captures
Nightwatch captures structured traces per request, job, and event. Each trace includes (see the sketch after this list):
- Timestamp — when the operation occurred
- Tenant context — which workspace the operation belongs to
- Operation type — HTTP request, queue job, domain event, listener
- Job payload — the data passed to the job (e.g., the Shopify webhook payload that triggered it)
- Stack trace — the full error trace if an exception occurred
- Timing data — how long each step took
- Downstream context — child operations spawned from the parent (e.g., a webhook job that fired a domain event that enqueued an automation job)
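To make this concrete, here is a minimal sketch of the fields a single trace entry might carry, written as a PHP array. The field names and the `ProcessShopifyWebhookJob` / `EnrollCustomerInAutomationJob` class names are illustrative assumptions, not Nightwatch's actual schema; only `OrderCreated` appears elsewhere in this guide.

```php
<?php

// Hypothetical shape of a single trace entry. Field and class names are
// illustrative assumptions, not Nightwatch's actual schema.
$trace = [
    'timestamp'   => '2025-01-15T14:30:02Z',
    'tenant'      => 'workspace_abc123',           // tenant context: which workspace
    'type'        => 'queue_job',                  // http_request | queue_job | domain_event | listener
    'name'        => 'ProcessShopifyWebhookJob',
    'payload'     => ['topic' => 'orders/create'], // e.g. the Shopify webhook payload
    'duration_ms' => 412,                          // timing data
    'exception'   => null,                         // full stack trace appears here on failure
    'children'    => [                             // downstream context: child operations
        ['type' => 'domain_event', 'name' => 'OrderCreated', 'children' => [
            ['type' => 'queue_job', 'name' => 'EnrollCustomerInAutomationJob'],
        ]],
    ],
];

var_export($trace);
```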
How to find a relevant log entry
Navigate to Nightwatch via your Galantis dashboard. Use the following search approaches depending on what you are investigating:
- By timestamp
- By error message
- By tenant ID
If you know approximately when an issue occurred, search by timestamp range. Narrow to a 5–10 minute window around the expected event time to limit results to the relevant period. This is the most reliable approach when you know when something should have happened — for example, “a campaign was launched at 14:30 and messages are not showing as delivered.”
Reading the log tree
Each log entry expands into a tree structure showing the full execution chain (see the example tree after this list). The pattern to follow is:
- Start at the root — the incoming webhook or the scheduled job that triggered the chain
- Expand the first level — the handler job that processed the webhook
- Look for exceptions or error markers at each level — they appear as distinct entries with stack traces
- Follow the chain to the level where execution stopped or produced an unexpected result
- The failing step’s entry will contain the specific error message, the data being processed, and the stack trace
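As an illustration only — the actual Nightwatch UI will render this differently — a failing chain might expand like this. The job names are hypothetical; `OrderCreated` and the error code are taken from examples elsewhere in this guide:

```text
POST /webhooks/shopify/orders-create          root: incoming webhook
└─ ProcessShopifyWebhookJob                   handler job · ok · 390 ms
   └─ OrderCreated                            domain event · ok
      └─ EnrollCustomerInAutomationJob        FAILED
         └─ Exception: CUSTOMER_IS_MISSING_CALLING_CODE
            (expand for the full stack trace and the record being processed)
```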
Reproducing issues using Nightwatch
Reproduce the issue
Perform the action that is failing in the affected workspace — send a campaign, trigger an automation, save widget settings, restock a product. This creates a fresh trace in Nightwatch for the specific failing event.
Open Nightwatch and search
Navigate to Nightwatch immediately after reproducing. Search by timestamp (the current time), error message if one was visible, or tenant ID. Locate the trace for the operation you just performed.
Expand the log tree
Open the trace and expand the log tree. Follow the execution chain from the root operation down to the failing step.
Identify the failure
Find the first error marker in the tree. Expand it to read the full error message, the operation context (which job, which record, which API call), and the stack trace.
Determine resolution or escalate
Use the error context to determine whether the issue is resolvable — for example, a `CUSTOMER_IS_MISSING_CALLING_CODE` error points to a data quality issue in Shopify that can be fixed by updating the customer record. If the error indicates an unexpected platform failure — an internal exception with no clear resolution path — copy the Nightwatch log URL and include it in your support ticket.
Common log patterns
Understanding what a healthy trace looks like — and what a failing one looks like — for the most common operations helps you diagnose issues faster.
Campaign message send failure
A healthy campaign send trace shows `SendCampaignMessagesBatchJob` completing with all messages reaching `SENT` status. A failed trace shows either:
- Pre-dispatch failure — the job failed before sending any messages. Look for errors in the batch preparation step — consent filtering, credit check, or template validation.
- Per-message failure — the job dispatched messages but some returned error responses from the Meta API. The error is logged per message with the Meta API error code and description. Common codes (see the sketch after this list): `131047` (outside 24-hour window without template), `132001` (template not approved), `131026` (recipient phone number invalid).
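As a hedged illustration, this is the kind of helper you might write to turn the codes above into next actions. The function is hypothetical — it is not part of Galantis or Nightwatch — and the code descriptions simply restate the list above:

```php
<?php

// Illustrative helper only — not part of Galantis or Nightwatch. Maps the
// Meta API error codes listed above to a suggested next action.
function explainMetaSendError(int $code): string
{
    return match ($code) {
        131047  => 'Outside the 24-hour window — resend using an approved template.',
        132001  => 'Template not approved — check the template status before resending.',
        131026  => 'Recipient phone number invalid — verify the customer record.',
        default => "Unrecognised Meta error code {$code} — consult the Meta API error reference.",
    };
}

echo explainMetaSendError(131047), PHP_EOL;
```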
Automation not enrolling a customer
A trace for a customer who should have been enrolled but was not will show either:
- Trigger did not fire — the root event (e.g., `OrderCreated`) is in the log but `AbandonedCheckoutTrigger` or the relevant trigger event shows no evaluation for this customer. Check whether the webhook was received and processed correctly.
- Eligibility check failed — the trigger fired but the customer was blocked by a consent check, frequency cap, or exclusion rule. The enrollment job will show a `SKIPPED` result with the specific check that blocked enrollment (sketched after this list).
- No trace at all — the webhook may not have been received. Check the Shopify webhook delivery log in Shopify Admin to confirm the webhook fired and received a `200` response from Galantis.
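A minimal sketch of the kind of eligibility gate described above, assuming invented class and method names — it illustrates why an enrollment trace can end in a `SKIPPED` result carrying a specific reason, and is not Galantis's actual code:

```php
<?php

// Hypothetical eligibility gate. The SKIPPED reason is what you would expect
// to see recorded on the enrollment job's trace entry.
final class EnrollmentResult
{
    private function __construct(
        public readonly string $status,          // ENROLLED | SKIPPED
        public readonly ?string $reason = null,  // which check blocked enrollment
    ) {}

    public static function enrolled(): self { return new self('ENROLLED'); }
    public static function skipped(string $reason): self { return new self('SKIPPED', $reason); }
}

function evaluateEligibility(bool $hasConsent, int $messagesThisWeek, bool $excluded): EnrollmentResult
{
    if (!$hasConsent)           return EnrollmentResult::skipped('consent_check_failed');
    if ($messagesThisWeek >= 3) return EnrollmentResult::skipped('frequency_cap_reached');
    if ($excluded)              return EnrollmentResult::skipped('exclusion_rule_matched');
    return EnrollmentResult::enrolled();
}

$result = evaluateEligibility(hasConsent: true, messagesThisWeek: 5, excluded: false);
echo "{$result->status}: {$result->reason}", PHP_EOL; // SKIPPED: frequency_cap_reached
```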
Webhook processing failure
A webhook handler job that failed will show an exception at the job execution level. Common causes (illustrated in the sketch after this list):
- Data shape mismatch — the webhook payload has an unexpected structure (can happen after Shopify API version changes)
- Database write failure — a constraint violation or connection issue during the record update
- Downstream API failure — a Meta API call triggered by the webhook (e.g., a catalog sync triggered by `products/update`) returned an error
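A sketch of where each of these three failure classes would surface in a handler, assuming hypothetical function names (the stubs stand in for Galantis's real persistence and sync steps):

```php
<?php

// Illustrative webhook handler. All names are hypothetical; the stubs stand
// in for the real DB write and Meta catalog sync.
function updateProductRecord(array $payload): void { /* DB write (stub) */ }
function syncProductToMetaCatalog(array $payload): void { /* Meta API call (stub) */ }

function handleProductsUpdateWebhook(array $payload): void
{
    // 1. Data shape mismatch — surfaces here after a Shopify API version change.
    if (!isset($payload['id'], $payload['variants'])) {
        throw new UnexpectedValueException('Unexpected payload shape for products/update');
    }

    try {
        updateProductRecord($payload);       // 2. Database write failure throws here
        syncProductToMetaCatalog($payload);  // 3. Downstream Meta API failure throws here
    } catch (Throwable $e) {
        error_log("products/update webhook failed for product {$payload['id']}");
        throw $e; // rethrow so the exception and stack trace land in the trace
    }
}

handleProductsUpdateWebhook(['id' => 123, 'variants' => []]);
```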
What to include in a support ticket
When escalating to support after consulting Nightwatch:
- Nightwatch log URL — a direct link to the trace for the failing operation. This is the single most valuable piece of context you can provide.
- Tenant workspace URL — confirms which workspace the issue affects
- Timestamp of the failing event — helps support find the relevant trace if the URL is not available
- Expected behavior — what should have happened
- Actual behavior — what happened instead
- Steps to reproduce — the exact actions that trigger the issue reliably
Related guides
- Support Index — Contact methods, response times, and escalation process
- Troubleshooting — Message Delivery — Common delivery errors and their resolution before needing Nightwatch
- Automations — Activity Tracking — Per-customer automation execution logs as a first diagnostic step before Nightwatch