Updated October 2025 with real-world samples, modern tool perspectives, and practical examples from actual projects.
Welcome back to Week 2 of our “Tester to Lead” series. In the previous post, we talked about transitioning from manual to automation testing. Now let’s get practical: which tools should you actually learn, what frameworks make sense, and what separates useful automation from brittle, time-wasting scripts.
If you’ve been putting off learning automation because the options feel overwhelming, this guide will cut through the noise. Need help deciding if automation is even right for your situation? Check our complete manual-to-automation skills guide.

Why Automation Matters (But Isn’t Everything)
Let’s be honest: I didn’t learn automation because I was passionate about scripting. I learned it because I wanted a title bump and a salary raise. Welcome to reality.
But over time, I found that automation works when it supports QA—not when it tries to replace it.
Speed Without the Repetition
Automated tests run while you sleep. They catch regressions before your morning coffee. They let you focus on the interesting stuff—edge cases, usability issues, the weird bugs that only show up when users do something completely unexpected.
Like that time I found a payment bug where hitting the back button at a specific moment during checkout charged users twice. Every automated test missed it because they only tested the happy path. Manual testing caught it because curiosity isn’t something you can script.
Feedback That Protects Builds
When automation hooks into your CI/CD pipeline, broken builds get flagged immediately. Developers can fix issues while the context is still fresh, not three days later when they’ve moved on to something else.
But here’s the catch: if your automation fails daily with false positives, your team stops trusting it. That failing test suite becomes background noise—and that’s worse than having no tests at all.
Scaling Without Burning Out
Your app grows. Features multiply. Browsers update. Mobile devices proliferate. Manual testing alone can’t keep up. Good automation gives you coverage across platforms without burning out your team.
The key phrase here is “good automation.” Bad automation is worse than no automation. If you’re automating every test case just to feel productive, you’ve just invented flaky hell.
Choosing Tools That Actually Work
Here’s the dirty secret: most automation tools work fine. The real question is which one fits your team, your tech stack, and your actual needs. I kept my workflow simple by focusing on specific tools instead of chasing every new framework.
Web UI Testing: What I Actually Use
Playwright – This is where I landed after years of trying different tools. I’m not a formally trained automation engineer; I came from real-world QA. Playwright worked for me because it’s fast, stable, and doesn’t require a DevOps degree to set up.
I run everything on Ubuntu via WSL because the Windows filesystem refused to play nice. Scripts live in Ubuntu, get pushed to GitHub, and teammates can fork or build on them. Simple. Stable. Shareable.
One warning: codegen generates garbage. It records every click, every hover, every accidental timeout. You’ll get 47 lines of code to navigate three pages, half of it bloat you’ll never use. Codegen is a starting point, not a finished test. If you don’t refactor what it generates, you’re building on sand.
Cypress – Still solid for JavaScript-heavy front ends. Cypress runs in the same loop as your app, which means fast feedback but some limitations with cross-origin stuff. Good for API interception, shadow DOM handling, and accessibility testing with cypress-axe. The QA Testing Playground at playground.qajourney.net has working examples you can test against.
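If you want to see what that looks like, here’s a minimal cypress-axe sketch. It assumes you’ve run `npm i -D cypress-axe axe-core` and added `import 'cypress-axe'` to your support file:
```javascript
// Minimal accessibility check with cypress-axe (a sketch, not a full suite).
describe('accessibility smoke', () => {
  it('playground home page has no detectable violations', () => {
    cy.visit('https://playground.qajourney.net');
    cy.injectAxe();   // inject the axe-core runtime into the page under test
    cy.checkA11y();   // fail the test if axe reports any violations
  });
});
```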
Selenium WebDriver – The old reliable. When I first started exploring automation, Selenium IDE was my gateway. For a while, I thought that was all automation was—something like AutoHotkey or iMacros. It felt more like scripting macros than writing tests. But it gave me confidence and a visual way to understand locators.
Selenium supports every language, every browser, every platform. Also the most verbose and occasionally frustrating. But if your org already uses it, our Selenium guide will help you write better tests without needing CI/CD from day one.
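If Selenium is what your org runs, the JavaScript bindings are less verbose than you might fear. A minimal sketch, assuming `npm i selenium-webdriver` and a local Chrome install; the data-testid selector is a placeholder for your app’s own:
```javascript
// Minimal selenium-webdriver example in JavaScript.
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://playground.qajourney.net');
    // Explicit wait instead of a sleep: poll until the element exists.
    const loginButton = await driver.wait(
      until.elementLocated(By.css('[data-testid="login-button"]')),
      10000
    );
    await loginButton.click();
  } finally {
    await driver.quit(); // always clean up the browser session
  }
})();
```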
API Testing: Skip the UI Entirely
Testing through the UI is slow. Testing APIs is fast. For CRUD operations, auth flows, and data validation, hit the APIs directly.
Postman/Newman – Visual interface for exploring APIs, CLI for running tests in CI. Dead simple to get started. I use this for quick endpoint validation and smoke tests.
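Newman also runs as a Node library, which is handy when your pipeline already lives in JavaScript. A minimal sketch, assuming a collection exported from Postman as smoke-collection.json:
```javascript
// Run a Postman collection from Node with Newman (npm i -D newman).
const newman = require('newman');

newman.run(
  {
    collection: require('./smoke-collection.json'), // exported from Postman
    reporters: 'cli',                               // print results to the terminal
  },
  (err, summary) => {
    if (err) throw err;
    console.log(`Failed assertions: ${summary.run.stats.assertions.failed}`);
  }
);
```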
REST Assured (Java) or Axios/SuperTest (JavaScript) – When you want everything in code. Better for complex scenarios and integration with your existing test framework.
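When you go the code route, a SuperTest smoke test is about this small. A sketch assuming Jest as the runner; the endpoints here are hypothetical placeholders, not real playground routes:
```javascript
// API smoke tests with SuperTest (npm i -D supertest jest).
const request = require('supertest');

describe('API smoke', () => {
  it('health endpoint returns 200', async () => {
    await request('https://playground.qajourney.net')
      .get('/api/health') // placeholder endpoint
      .expect(200);
  });

  it('auth rejects bad credentials', async () => {
    await request('https://playground.qajourney.net')
      .post('/api/login') // placeholder route
      .send({ email: 'nobody@example.com', password: 'wrong' })
      .expect(401);
  });
});
```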
Mobile: Still Complicated
Appium – Works for both iOS and Android. Uses WebDriver protocol, so if you know Selenium, you know Appium. Setup can be a pain, but once it’s running, it’s reliable enough.
Native frameworks (Espresso/XCUITest) – Faster and more stable than Appium, but you’re writing separate tests for each platform. Only worth it if you have dedicated mobile QA.
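To make “if you know Selenium, you know Appium” concrete, here’s a minimal sketch using the webdriverio client. It assumes a local Appium 2 server on the default port and an Android emulator; the app path and selector are placeholders:
```javascript
// Minimal Appium session via webdriverio (npm i -D webdriverio).
const { remote } = require('webdriverio');

(async () => {
  const driver = await remote({
    hostname: 'localhost',
    port: 4723, // Appium's default port
    capabilities: {
      platformName: 'Android',
      'appium:automationName': 'UiAutomator2',
      'appium:app': '/path/to/app.apk', // placeholder path to your build
    },
  });
  try {
    // '~' targets the accessibility id: the mobile analog of data-testid.
    const loginButton = await driver.$('~login-button');
    await loginButton.click();
  } finally {
    await driver.deleteSession();
  }
})();
```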
Real Talk: Pick Your Tools and Stick With Them
I’ve seen teams waste months chasing the “perfect” tool. Here’s what actually works: pick Playwright for web, Postman for APIs, and Appium for mobile if you need it. Learn them deeply. Don’t switch unless you have a damn good reason.
Tool fatigue is real. Mastery beats novelty.
What to Automate (And What to Skip)
Not everything should be automated. As a QA Lead, I don’t force my team to learn automation. I teach them to think critically about when it actually makes sense.
My Simple Rule:
- If something is repetitive, stable, and annoying to test more than twice—automate it
- If something needs human intuition or judgment—test it manually
- If it’s still changing weekly—don’t waste time automating yet
I don’t automate bleeding-edge features. I don’t automate flows that change constantly. And I don’t automate just to say we’re doing automation.
High ROI Automation:
- Smoke tests – Critical path validation that proves basic functionality works
- Regression checks on staging – When I feel too lazy to test manually, automation picks up the slack
- Data-heavy scenarios – Form validation across locales, currencies, or datasets
- Integration stability – Third-party APIs, microservices, database connections
Low ROI Automation:
- Features still in active development with changing requirements
- Rapidly evolving UI with frequent design updates
- One-time data migrations or setup tasks
- Edge cases that occur so rarely maintenance isn’t worth it
For a deeper look at this balance, read manual vs automation from a QA lead’s perspective. The best QA teams do both, strategically.
If you’re looking to level up your manual testing with modern tools, check out how AI can assist your manual testing workflow without replacing critical human judgment.
Setting Up Your First Automation Project
Start Simple
Here’s your basic structure:
```
/tests
  /web
  /api
  /mobile
/pages (or /models)
/config
  .env.dev
  .env.staging
package.json
```
Use Git. Write a README. Make it easy for the next person (probably you in six months) to understand what’s happening.
I keep my scripts in GitHub so teammates can fork, learn from, or build on them. Initially I ran everything on Windows—that didn’t last. WSL and the Windows filesystem refused to play nice. So I switched to full Ubuntu on WSL, Node installed, Playwright set up clean.
Your First Test Should Be Boring
Don’t start with “test the entire checkout flow including payment processing.” Start with:
- Can a user log in?
- Does the health check endpoint return 200?
- Does the home page load?
Boring tests teach you the framework without drowning you in complexity. Once boring works, add interesting.
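For reference, here’s what boring looks like in Playwright. A sketch assuming the default `npm init playwright@latest` scaffolding; the /health endpoint is a placeholder assumption, not a documented playground route:
```javascript
// Two deliberately boring first tests.
import { test, expect } from '@playwright/test';

test('home page loads', async ({ page }) => {
  await page.goto('https://playground.qajourney.net');
  await expect(page).toHaveTitle(/./); // loosest possible check: some title exists
});

test('health check returns 200', async ({ request }) => {
  const response = await request.get('https://playground.qajourney.net/health'); // placeholder endpoint
  expect(response.status()).toBe(200);
});
```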
Manual First, Automate Once Stable
New feature hits staging? Test it manually first to confirm it meets acceptance criteria. This validates the feature is worth automating and helps you understand it well enough to write meaningful tests.
After initial bugs are fixed and the design solidifies, then build automation. You’re not chasing moving targets.
Hook It Into CI When Ready
The moment your first test passes locally, consider getting it running in CI. Jenkins, GitHub Actions, GitLab CI—doesn’t matter. I’m working on this myself with the qajourney-automation-lab repo targeting playground.qajourney.net.
I’m not there yet on full CI/CD—but I’m getting there. You don’t need to be a DevOps wizard to add value with automation.
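For reference, the stock GitHub Actions workflow for Playwright is about this small. This is a sketch of the standard setup, not my exact pipeline:
```yaml
# .github/workflows/playwright.yml: run the suite on every push and PR.
name: Playwright Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps  # browsers plus OS dependencies
      - run: npx playwright test
```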
Writing Tests That Don’t Break Constantly
Stable Selectors or Nothing
After codegen records your test, audit every selector before you commit it.
Bad selector from codegen:
```javascript
await page.click('div > button:nth-child(3)');
```
What happens: Designer changes button order. Everything breaks.
Good selector:
```javascript
await page.click('[data-testid="login-button"]');
```
If your developers haven’t added data-testids, ask them to. It’s the single most important thing you can do for test stability. If they say “it’s not production code,” remind them it’s testing infrastructure—exactly as important.
When working with frameworks like Bootstrap, Tailwind, or custom code, selector strategy matters:
- Bootstrap tends to offer consistent class names and structures, making CSS selectors easier
- Tailwind CSS uses long utility-first class names that can make locators noisy or fragile
- Custom code (especially poorly structured) leads to spaghetti HTML where even good XPath breaks
Coordinate with your UX designer and senior dev when reviewing UI from junior engineers. Sloppy markup sneaks through code reviews—your automation can be the safety net.
Extract Test Data
Don’t hardcode values:
```javascript
// Bad
await page.fill('[data-testid="search"]', 'iPhone 15 Pro Max');

// Good
async function searchProduct(page, productName) {
  await page.fill('[data-testid="search"]', productName);
  await page.click('[data-testid="search-button"]');
}
```
Now you test multiple scenarios with the same code. When test data changes, you update it once.
Keep Tests Focused
Each test should validate one thing. Tests that verify 12 different behaviors are impossible to debug. When they fail, you have no idea which of those 12 things broke.
Remove the Fluff
Codegen records everything—including your mistakes. After it finishes, go through line by line:
- Remove unnecessary waits (Playwright has smart built-in waits)
- Remove accidental hovers or clicks
- Add actual assertions that validate outcomes
Clean version:
```javascript
await page.fill('[data-testid="search"]', 'test');
await page.click('[data-testid="search-button"]');
await expect(page.locator('[data-testid="results"]')).toHaveCount(5);
```
Now the test validates something, not just “did this sequence of clicks work.”
Break Monolithic Tests Into Functions
Don’t write one massive test that does login, search, add to cart, checkout, and payment. If payment fails, you wasted time verifying login and search—you already know those work.
Create reusable flows:
```javascript
// flows.js
export async function loginAs(page, email, password) {
  await page.goto('https://playground.qajourney.net/login');
  await page.fill('[data-testid="email"]', email);
  await page.fill('[data-testid="password"]', password);
  await page.click('[data-testid="login-button"]');
}

// test.js
import { test } from '@playwright/test';
import { loginAs } from './flows.js';

test('checkout flow', async ({ page }) => {
  await loginAs(page, 'test@example.com', 'password');
  // rest of test
});
```
Now you can test individual flows and debug faster.
Handling Flakiness Like Your Career Depends On It
Flaky tests destroy trust. When tests randomly fail, people stop believing in automation. They stop checking results. Your entire investment becomes worthless.
I didn’t write scripts for every PR—that’s suicide unless you like debugging flaky tests every day. Instead, I used automation for what mattered: UI regression checks on staging.
Common Causes:
- Race conditions – Use proper waits, not sleep statements
- Test interdependence – Each test should work in isolation
- Fragile selectors – Prefer data-testid over brittle XPath
- Environment issues – Clean up test data, use fresh states
Fix flaky tests immediately. Don’t let them accumulate. A suite with 5 stable tests is better than 50 tests where 10 randomly fail.
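Of the causes above, race conditions are the most common, and the fix is simple to show. A sketch with placeholder selectors:
```javascript
import { test, expect } from '@playwright/test';

test('results appear after search', async ({ page }) => {
  await page.goto('https://playground.qajourney.net'); // placeholder URL
  await page.fill('[data-testid="search"]', 'test');
  await page.click('[data-testid="search-button"]');

  // Bad: a fixed sleep. Flaky on slow CI runners, wasteful on fast ones.
  // await page.waitForTimeout(3000);

  // Good: wait for the actual condition, with a bounded timeout.
  await expect(page.locator('[data-testid="results"]')).toBeVisible({ timeout: 10000 });
});
```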
Building a Strategy That Scales
Start With Smoke Tests
Cover the critical path first:
- Can users log in?
- Can they view their data?
- Can they complete core actions?
These run on every build. If they fail, everything stops.
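One lightweight way to wire that up in Playwright is tagging tests by title, then filtering in CI with `npx playwright test --grep @smoke`. A sketch with a placeholder flow:
```javascript
// Tag critical-path tests so CI can run just the smoke suite on every build.
import { test, expect } from '@playwright/test';

test('user can log in @smoke', async ({ page }) => {
  await page.goto('https://playground.qajourney.net/login');
  // ...login steps, then assert something meaningful loaded
  await expect(page.locator('[data-testid="dashboard"]')).toBeVisible(); // placeholder selector
});
```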
Add Regression Tests Gradually
After smoke tests stabilize, expand to:
- Common user workflows
- Previously found bugs (regression prevention)
- High-risk features
Don’t try to automate everything at once.
Review and Prune Regularly
Old tests for removed features waste time. Outdated tests create confusion. Every quarter, audit your suite:
- Are these tests still relevant?
- Do they provide value?
- Are they testing the right things?
Delete ruthlessly.
The Reality Check
Test automation is powerful, but it’s not magic. You’ll write tests that break when UI changes. You’ll deal with flaky failures. You’ll wonder if it’s worth the effort.
It is. But only if you:
- Start small
- Focus on value
- Maintain aggressively
- Combine with manual testing
Automation doesn’t replace thinking. It amplifies it. As we covered in manual testing skills that make you better at automation, the best automation engineers are exceptional manual testers first, not because they spent years clicking buttons, but because manual testing taught them what actually needs testing.
What’s Next
On Wednesday, we’re diving into the soft skills that separate good QAs from great ones—communication that actually gets things done and analytical thinking that finds problems before they ship.
Until then, pick a tool, write a test, break something, fix it, and learn.


