When you do exploratory testing, follow the steps below:
- Dump the XML of every new UI page with `uiautomator dump`, naming the files `step1_dump.xml`, `step2_dump.xml`, etc.
- Detect all clickable elements.
- If a click succeeds, mark the click position with:
  `magick input.png -fill "rgba(255,0,0,0.5)" -draw "circle 100,100 125,100" output.png`
- After the session finishes, output a document `android_exploratory_test_report.md` with use cases. Each step contains a screenshot with the click area highlighted.
- IMPORTANT: ALL screenshots in `android_exploratory_test_report.md` MUST use the `<img>` tag format with `width="120"`.
- Sample markdown step:

  | Step | Screenshot |
  | --- | --- |
  | Tap the target element | <img src="step1_screenshot.png" width="120"> |
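The dump-and-annotate loop above can be sketched in Python. This is a sketch only: the helper names, the 25-pixel circle radius, and the use of `xml.etree` to parse the uiautomator dump are assumptions.

```python
import re
import xml.etree.ElementTree as ET

def clickable_elements(dump_xml: str):
    """Parse a uiautomator dump and return (x, y) centers of clickable nodes."""
    root = ET.fromstring(dump_xml)
    centers = []
    for node in root.iter("node"):
        if node.get("clickable") == "true":
            # bounds attributes look like "[x1,y1][x2,y2]"
            x1, y1, x2, y2 = map(int, re.findall(r"\d+", node.get("bounds", "")))
            centers.append(((x1 + x2) // 2, (y1 + y2) // 2))
    return centers

def magick_mark_cmd(src: str, dst: str, x: int, y: int, radius: int = 25):
    """Build the ImageMagick command that circles a click position."""
    return ["magick", src, "-fill", "rgba(255,0,0,0.5)",
            "-draw", f"circle {x},{y} {x + radius},{y}", dst]
```

Feeding the centers from `clickable_elements` into `magick_mark_cmd` reproduces the `magick ... -draw "circle 100,100 125,100"` command shown above for a click at (100, 100).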
When you generate functional test scripts, follow the steps below:
- Read `android_exploratory_test_report.md` and understand the test cases.
- Create a `tests_suite` folder with a standard pytest structure:
  - `conftest.py` for shared fixtures and setup
  - `test_*.py` files for each test case
  - `__init__.py` to make it a package
- IMPORTANT: use a consistent naming convention:
  - test files: `test_[feature_name].py`
  - test functions: `test_[action_description]()`
  - fixtures: `setup_app()`, `teardown_app()`
- MANDATORY: add a checkpoint after every action using `assert exists()` or `wait()`. Structure each test function as:

```python
def test_feature_name():
    # setup
    start_app(package_name)
    # action 1
    touch(element)
    assert exists(expected_element), "checkpoint failed"
    # action 2
    touch(next_element)
    wait(next_expected_element, timeout=10)
    # cleanup
    stop_app(package_name)
```

- Generate a `pytest.ini` configuration file.
- Run `pytest tests_suite/` to validate all tests.
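A `conftest.py` matching the fixture convention above could look like the sketch below. The package name is a placeholder and the Airtest-style `start_app`/`stop_app` imports are assumptions; it merges `setup_app()` and `teardown_app()` into a single yield fixture.

```python
# tests_suite/conftest.py (sketch)
import pytest

PACKAGE_NAME = "com.example.app"  # placeholder; replace with your app's package

@pytest.fixture
def setup_app():
    # assumed Airtest-style helpers; swap in your driver's equivalents
    from airtest.core.api import start_app, stop_app
    start_app(PACKAGE_NAME)   # setup before each test
    yield
    stop_app(PACKAGE_NAME)    # teardown after each test
```

A matching `pytest.ini` might then simply set `testpaths = tests_suite` so `pytest` discovers the suite from the repository root.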
When you fix an existing test script, follow the steps below:
- Navigate to the specific UI right before the step that does not work.
- Explore that UI, taking a screenshot and an element dump at every step.
- After the exploration, verify that the step works.
- Fix the test script.
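The per-step screenshot and element dump during that exploration can be scripted with plain `adb`. The sketch below only builds the command lists; the on-device dump path and the file-naming scheme are assumptions.

```python
def step_capture_cmds(step: int, out_dir: str = "."):
    """Build the adb commands that capture one exploration step."""
    return [
        # PNG screenshot straight to stdout (redirect it to stepN_screenshot.png)
        ["adb", "exec-out", "screencap", "-p"],
        # dump the view hierarchy on the device...
        ["adb", "shell", "uiautomator", "dump", "/sdcard/window_dump.xml"],
        # ...then pull it back under the stepN_dump.xml naming convention
        ["adb", "pull", "/sdcard/window_dump.xml",
         f"{out_dir}/step{step}_dump.xml"],
    ]
```

Running these with `subprocess.run(cmd, check=True)` for each step leaves the numbered screenshot/dump pairs the exploratory-testing steps above expect.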
Install Project Rule
Add this rule to your project's context:
1. Download to project rules:
```shell
mkdir -p .amazonq/rules && curl -o .amazonq/rules/mobile-testing-automation.md https://promptz.dev/rules/mobile/mobile-testing-automation/index.md
```