Understanding Sahi Pro's Architecture

Proxy-Based Web Driver

Sahi Pro operates as a proxy between the browser and the application under test. This lets it inject scripts into the page, enabling powerful element identification and control across domains. However, it also introduces dependencies on browser settings, proxy configuration, and network permissions.

Dynamic Accessors and Object Identification

Sahi interacts with page objects through APIs such as _set, _get, and _click, combined with accessor APIs like _textbox(), _link(), and _image(). This dynamic approach is a double-edged sword: while flexible, poorly scoped accessors often produce false positives or miss elements entirely.
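
For illustration, here is a minimal sketch of how these APIs combine in a .sah script; the element names are assumptions, not taken from a real application:

// Illustrative only: element names are placeholders
_setValue(_textbox("q"), "order 1042")     // type into a textbox identified by name/id
_click(_button("Search"))                  // click a button identified by its visible text
_assertExists(_link("Order #1042"))        // verify that a result link is rendered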

Inbuilt Test Runner and Suite Management

Sahi Pro includes its own suite runner, report generator, and scheduling features. It supports both UI and headless execution. While convenient, managing large test repositories requires structured naming, modularization, and debugging strategies.

Common Issues in Sahi Pro Automation

1. Dynamic Element Failures

Applications with dynamic IDs or deeply nested DOMs often cause Sahi’s element accessors to break. Elements may appear present in the recorder but fail at runtime due to timing or scope mismatches.

// Example accessor that may break due to dynamic ID
_click(_textbox("input123"))  // Replace with better strategies like _near()
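
One way to harden the step above, assuming a stable label sits next to the field (the label text here is hypothetical), is to scope the accessor by a nearby element rather than by the generated ID:

// Scope by a stable neighbouring element instead of a dynamic ID
_click(_textbox(0, _near(_label("Username"))))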

2. Synchronization and Timing Errors

Sahi auto-waits for elements, but custom AJAX loaders, animations, or chained events can still introduce flakiness. Tests fail intermittently when an action fires before the DOM has stabilized.

3. Memory Bloat in Large Test Suites

Long test sessions, especially involving file uploads, multiple iframes, or Java applets, lead to memory consumption spikes. This may cause the browser or Sahi engine to hang or crash silently.

4. Inconsistent Behavior Across Browsers

While Sahi supports Chrome, Firefox, and Edge, inconsistencies arise due to differences in rendering and JavaScript execution. Tests that pass on one browser may fail on another, especially with React or Angular apps.

5. CI/CD Integration Failures

Integrating Sahi with Jenkins, Bamboo, or custom pipelines often fails due to improper command-line invocations, missing environment variables, or browser context issues in headless nodes.

Diagnostics and Debugging Techniques

Enable Verbose Logs

Sahi writes detailed logs to the `userdata/logs` directory. Enable debug mode in `sahi.properties` by setting `log.level=debug`, then review `diagnostics.log` and `playback_logs.txt` for script failures.

# Inside sahi.properties
log.level=debug
diagnostics.log.max_size=20480

Use the Controller’s Variable Watch

The Sahi Controller allows live inspection of variables during test playback. Use it to debug incorrect variable scoping or DOM manipulation mismatches.

Browser Console vs. Sahi Logs

Open the developer console (F12) alongside Sahi playback to check for JS errors or unexpected DOM changes. Many rendering issues that aren’t captured by Sahi will appear in the browser console.

Performance Monitoring

Use Windows Task Manager or Linux’s `top` and `free -m` commands to monitor Sahi memory usage. Sudden spikes often correlate with heavy test actions or memory leaks in applets.

Root Causes and Solutions

Improper Accessor Strategy

Replace hardcoded accessors with context-aware methods like `_near`, `_in`, or `_parentNode`. Avoid relying on fragile, auto-generated element IDs. Create reusable accessor functions for stable interactions.
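
As a sketch, a reusable accessor function can centralize this scoping so individual scripts never reference raw IDs; the function and element names below are hypothetical:

// Reusable accessor: resolves the search box relative to a stable container
function searchBox() {
    return _textbox(0, _in(_div("search-panel")))
}

_setValue(searchBox(), "invoice 7801")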

Lack of Wait or Polling Logic

Insert `_wait()` or `_waitFor()` where automatic waits fail. Use conditional wait blocks to handle spinners or async transitions.

_waitFor(_textbox("Search"), 10000)
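
Where a spinner or async transition is involved, a conditional wait can gate the next action on a target element becoming visible rather than on a fixed delay; the element names below are illustrative:

// Wait up to 10 seconds for the results container to appear before acting
_wait(10000, _isVisible(_div("results")))
_click(_link("Results"))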

Script Modularization Deficit

Monolithic scripts increase memory usage and reduce reusability. Break down tests into reusable .sah include files and define setup/teardown logic in a standard harness.
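
A typical layout keeps shared steps in .sah include files and pulls them in with `_include`; the file and function names below are illustrative:

// Module script: reuse shared setup from an include file
_include("common/login.sah")      // hypothetical include that defines doLogin()
doLogin("qa_user", "secret")      // shared setup step
// ... module-specific steps, then shared teardown ...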

Test Runner Overload

Running hundreds of test cases sequentially in a single session leads to browser exhaustion. Schedule parallel batches and restart browser sessions between modules.

Unsupported or Unstable CI Invocation

Use `testrunner.bat` or `testrunner.sh` with full paths and explicit `-suite`, `-browser`, and `-host` arguments. Avoid relying on UI interactions during CI runs.
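
A hedged sketch of such an invocation is shown below; the installation path, suite file, and host are placeholders, and the exact flags accepted depend on your Sahi Pro version:

# Illustrative only: adjust paths and values to your environment
/opt/sahi_pro/userdata/bin/testrunner.sh -suite suites/regression.suite -browser chrome -host localhost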

Step-by-Step Remediation Plan

Step 1: Audit Test Accessors

Review all element locators and replace dynamic ones with resilient accessors. Use Sahi’s Object Spy to regenerate failing steps based on runtime visibility.

Step 2: Optimize Wait Strategy

Insert `_wait`, `_waitFor`, and polling logic wherever asynchronous behavior is observed. Prefer condition-based waits over fixed-duration sleep statements.

Step 3: Modularize Scripts and Suites

Break large suites into multiple smaller suites by module or function. Use includes for shared utilities, headers, and navigations.

Step 4: Memory Leak Prevention

Restart the browser after every 30–50 test cases. Use `kill_browser()` at teardown. Watch for unused variables or closures that hold DOM references.
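
As a sketch, a small teardown helper built around the `kill_browser()` call mentioned above keeps this policy in one place; the helper name and call site are illustrative:

// Hypothetical teardown helper: release the browser so memory is reclaimed
function tearDownModule() {
    kill_browser()
}

tearDownModule()    // call explicitly at the end of each module script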

Step 5: Harden CI/CD Integration

Test testrunner commands locally before adding them to CI. Provide all required environment variables explicitly. Use headless browsers with Xvfb on Linux agents and simulate realistic screen dimensions.
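
On a headless Linux agent, one way to provide a realistic display is to wrap the testrunner call in xvfb-run; the screen geometry and paths below are placeholders:

# Give the browser a virtual 1920x1080 display on a headless CI node
xvfb-run -a --server-args="-screen 0 1920x1080x24" ./userdata/bin/testrunner.sh -suite suites/regression.suite -browser chrome -host localhost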

Best Practices for Enterprise-Grade Stability

  • Prefer `_near` and `_in` for resilient access
  • Use reusable .sah includes to reduce code duplication
  • Enable log rotation and regular cleanup
  • Use `kill_browser()` and session restarts to avoid memory leaks
  • Integrate Sahi with test management tools (e.g., TestRail)

Conclusion

Sahi Pro is a mature and flexible test automation platform well-suited for complex enterprise applications. However, its strength in flexibility can become a weakness if not harnessed with best practices and architectural discipline. Issues like flaky tests, synchronization failures, and CI instability are often symptoms of poor test design, insufficient logging, or environment mismatch. By understanding Sahi’s architecture, leveraging its debugging capabilities, and applying modular test development strategies, QA leaders and automation engineers can create robust, scalable, and maintainable automated test suites. In regulated or high-compliance industries, Sahi Pro’s auditability and control make it a powerful ally—when wielded correctly.

FAQs

1. Why does Sahi fail to find elements that are visible in the browser?

This usually happens due to dynamic IDs or improper scoping. Use `_near`, `_in`, or relative accessors instead of absolute IDs.

2. How can I prevent memory leaks during long test runs?

Restart the browser periodically, destroy unused variables, and modularize your test suites. Use `kill_browser()` in teardown blocks.

3. What causes Sahi to behave differently in Jenkins compared to local runs?

Differences in screen resolution, proxy settings, and headless environments can affect test behavior. Simulate user-like environments in CI with XVFB or dummy displays.

4. How do I debug flaky tests in Sahi?

Enable debug logs, inspect element visibility and network timings, and use `_waitFor()` to stabilize test execution against async behavior.

5. Can Sahi Pro test React or Angular applications?

Yes, but you must handle dynamic rendering carefully. Use resilient accessors and wait strategies to deal with virtual DOM updates and route transitions.