Understanding Digital Testing Terminology, QA to UAT to E2E

2021.11.13

If you work in the digital space, you have probably been responsible for, or at least heard, some common testing terminology. More often than not, the most common business testing is User Acceptance Testing (UAT), in which the business owner of a specific function runs a test to confirm the completion of that focus area. We tend to generalize our testing when we speak about it in terms of “QA” or “UAT,” but there are many more types, categorized by the level of detail or the focus of the testing.

Testing Definitions

  1. Unit

Unit testing is typically done by the developer; writing a test for each class or object is best practice (JUnit, RSpec, etc.).
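As a minimal sketch, a JUnit 5 unit test for a hypothetical Cart class (the class and the behavior under test are illustrative only, not from any particular project):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical class under test: a bare-bones shopping cart.
class Cart {
    private int itemCount = 0;
    void addItem() { itemCount++; }
    int getItemCount() { return itemCount; }
}

class CartTest {
    @Test
    void addItemIncrementsCount() {
        Cart cart = new Cart();
        cart.addItem();
        // One focused assertion per behavior keeps failures easy to localize.
        assertEquals(1, cart.getItemCount());
    }
}
```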

  2. Smoke

Smoke testing is done by the tester, who picks a few test cases to make sure the build delivered to QA can be used for a larger testing effort.
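One common way to carve out a smoke subset is test tagging. The sketch below assumes JUnit 5 and a Maven Surefire run; the URL and the single check are placeholders for a team’s real critical-path cases:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Critical-path checks tagged "smoke" so a fast subset can vet a new build
// before the larger QA effort, e.g. with Maven: mvn test -Dgroups=smoke
class SmokeTests {
    @Test
    @Tag("smoke")
    void homePageIsReachable() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com")).build(); // placeholder URL
        HttpResponse<Void> response =
                client.send(request, HttpResponse.BodyHandlers.discarding());
        // HTTP 200 confirms the build is at least up and serving pages.
        assertEquals(200, response.statusCode());
    }
}
```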

  3. Functional (Cross-Browser)

Confirming that each individual system works as expected on its own, as described in a functional specification document (FSD).

  4. SIT, Integration

Testing between systems, i.e., the data passed between them and its processing/transformation.

  5. Data Validation

Confirming that the data elements and data tables are correct after a migration or a larger round of processing. This does not test the processes themselves but the resulting data.
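As one illustration, a post-migration check might compare row counts between a legacy table and its migrated counterpart. This JDBC sketch uses placeholder connection strings and a hypothetical customers table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Compares row counts between a legacy table and its migrated copy.
// Connection details and the table name are placeholders.
public class MigrationRowCountCheck {

    public static void main(String[] args) throws Exception {
        try (Connection legacy = DriverManager.getConnection(
                     "jdbc:postgresql://legacy-host/db", "user", "pass");
             Connection target = DriverManager.getConnection(
                     "jdbc:postgresql://new-host/db", "user", "pass")) {
            long legacyCount = countRows(legacy, "customers");
            long targetCount = countRows(target, "customers");
            // Note: this inspects the *result* of the migration,
            // not the migration process itself.
            if (legacyCount == targetCount) {
                System.out.println("Row counts match: " + legacyCount);
            } else {
                System.out.printf("MISMATCH: legacy=%d, target=%d%n",
                        legacyCount, targetCount);
            }
        }
    }

    private static long countRows(Connection conn, String table) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
```

Row counts are only a first pass; full data validation would also check sums, key uniqueness, and spot-check individual records.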

  6. E2E, End to End

Testing the customer experience and the associated backend systems required to complete the process.

  7. Regression

Re-testing after bugs and issues have been resolved to confirm the solution and ensure no unintended impacts.

  8. UAT, User Acceptance Testing

Testing completed by end users validating business processes and customer use.

  9. BETA UAT, Limited First-Round User Acceptance Testing (also BETA)

This version of testing happens earlier in the project if a phased/sprint approach is needed to launch it. It often tests the first iteration of a tool/process with the understanding that there may be many bugs that will be fixed in future iterations.

  10. Load / Performance / Stress

Testing the limits of the system’s capacity, responsiveness, and stability under load (both peak and sustained duration).
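Dedicated tools (JMeter, Gatling, k6) are the usual choice here; purely to illustrate the idea of sustained concurrent load, a bare-bones sketch in plain Java with placeholder numbers and URL:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Fires concurrent requests at an endpoint and reports average latency.
public class MiniLoadTest {

    public static void main(String[] args) throws Exception {
        int users = 50;            // simulated concurrent users (placeholder)
        int requestsPerUser = 20;  // requests each user sends (placeholder)
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com")).build(); // placeholder URL
        AtomicLong totalMillis = new AtomicLong();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                    } catch (Exception e) {
                        // A real harness would record failures separately.
                    }
                    totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        System.out.println("Average latency (ms): "
                + totalMillis.get() / (users * requestsPerUser));
    }
}
```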

  11. Security / Vulnerability / Penetration

Testing to review basic website weaknesses and exposure to common threats.

  12. Cutover

Testing all steps to be executed in the transition from the old system to the new one prior to go-live, ensuring all tasks, timing, and resources are confirmed in an environment that does not impact current production.

  13. Accessibility

Testing that reviews compatibility with the ADA (Americans with Disabilities Act), under which a website should offer alternate ways to interact for individuals who use different view types or interaction tools, e.g., a different screen resolution, audio dictation, or non-mouse-based interaction.

  14. Backward Compatibility

If there is a legacy system or version requirement, this type of testing ensures that any additions to suit a new need, system, or purpose do NOT impact the current functionality of the legacy system(s) or version(s).

Standard Testing Types

  1. Manual Testing

Manual testing is usually done by an individual or group of individuals, each of whom personally takes each step in a test case, reviews the expected and actual results, and reports on their findings.

  2. Automated Testing

Automated testing uses a variety of tools, depending on the system, that follow a set of rules or instructions to have a digital device, interface, or computer run a group of predetermined tests and report on the success or failure of the results without human interaction.
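As a sketch of what “without human interaction” looks like in practice, a Selenium WebDriver check in Java (assumes ChromeDriver is installed; the URL, element names, and expected title are placeholders):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// An automated browser test: runs a predetermined login flow and reports
// pass/fail with no human stepping through the test case.
public class AutomatedLoginCheck {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/login"); // placeholder URL
            driver.findElement(By.name("username")).sendKeys("test-user");
            driver.findElement(By.name("password")).sendKeys("test-pass");
            driver.findElement(By.id("submit")).click();
            // Expected result: landing on a dashboard page after login.
            boolean passed = driver.getTitle().contains("Dashboard");
            System.out.println(passed ? "PASS" : "FAIL");
        } finally {
            driver.quit();
        }
    }
}
```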

Common Pitfalls & Solutions from a PRO

1. UAT starts at the end – Though traditionally this is true, it is not applicable in today’s fast-changing environments. Customers expect faster change, stakeholders expect to see changes sooner, and executives are anxious about timing and delivery effectiveness. UAT needs to be an iterative process: work-in-progress samples need to be shared with business stakeholders, and effective communication is paramount so business users are well informed that they are looking at a changing/early version of the product or feature. The objective of an iterative UAT is to get early feedback from the business so changes can be made and adjusted.

2. There is no end to testing – Testing of e-commerce applications is centered on an ‘economically viable’ model. What this means is that testing can be endless: there will be bugs, re-testing, and more bugs. It’s important to ensure testing ends gracefully; 2-3 rounds of full application testing is a good rule of thumb. Product and development must keep pace with testing and resolve issues as and when they are logged. DO NOT create a backlog of defects; we can never catch up on it. Product teams must prioritize defects as they are logged, and development teams must resolve issues as they are logged. We move forward as one team/unit.

3. Shift left – Testing is all about making sure the product meets ‘customer expectations,’ and it is extremely important for teams to engage testing teams early in the process, not just ramp up a team at the end. Testing needs to be continuous and constant through the life of a project. What this means is: shift left, bring testing in early, so we don’t find major blockers toward the end of the project.

4. Customer-obsessed testing – It’s time to modernize testing: testers are no longer testing the specifications of the product, but testing it from a customer perspective. It’s important that qualitative and quantitative information about the target customer is shared with testers. This helps them think from a customer’s perspective rather than a system view.

5. Automation is NOT testing – Testing is experimentation. Now that automation is the new buzzword in the industry, more and more companies are trying to move to 100% automated testing, which could work in theory but does not work for the majority of organizations because they are not mature enough to fully adopt this change. Companies need to carefully evaluate their own processes to determine whether this is the right path and, if so, make sure there is a clear way to get there: the right tools, the right people, and overall the right mindset for the people who will engage.

6. A testing team without automation is like a modern army without automatic rifles – There are only 40 hours of testing per person each week, and adding more bodies to test repetitive work is a waste of human intelligence. Testers should be used for intelligent work/experimentation; the assembly line must be operated by machines, not humans. Automation is key for any successful project or company. In e-commerce, at least 50% of use cases must be automated.

Dinesh Suryakumar, Senior Delivery Manager – Data Solutions, Perficient