
Software testing methodologies 101


July 23, 2020

Not long after you join Workiva, you’ll start to hear people talk about “testing.” When you’re new, this sounds like a good idea, so you jump on board and start writing tests. Your dev lead responds with, "Those unit tests are great, but we need some integration tests too," and you think:

I wrote some tests, what more do you want from me?

Have no fear, we’re going to look at what they meant and why they asked.

Why do we spend time writing tests?

Our customers expect a high-quality product from us, due to the nature of the information they trust us to manage. Security must be our top concern when building new products, since the data is highly sensitive and in many cases regulated by government entities with large fines for mishandling.

At the same time, usability is also a high priority. Our customers love our products today, and we want that to continue. If users start finding bugs that prevent or impede them from accomplishing their work, they’ll quickly become disillusioned and want to use a different product. Do not underestimate the impact of bugs on users—one you perceive as minor may result in long delays in completing their tasks.

Consider how you choose your development tools: do you quickly abandon a new piece of software because it seems to add more work than it prevents, or do you dislike a tool in the development toolchain because it slows you down more than it provides value?

It’s extremely important to our continued success that we detect bugs or usability issues in our applications early, so we can make our customers' lives happier and easier.

When you start a new project, testing your changes is quite simple: run your application. As it grows in complexity, that simple statement becomes "run N scenarios," and often it continues to expand exponentially.

Frequently, a team then moves to a "regression suite," implemented as a spreadsheet of scenarios that should be done in the application to confirm it’s working as expected. This helps, since now you know what to test, and the expected result is documented. But, you have to do all of those tests by hand on every release. Are you prepared to manually check tens, hundreds, or thousands of scenarios in your application every time you release?

On every release? Forever?

Computers are exceptionally good at repetitive tasks, and we should take advantage of them whenever possible, because they’re more accurate, faster, and cheaper than the hundreds of employees it would require to perform our tests manually.

Having people involved in testing is still immensely valuable, but you should take advantage of their strengths, like their power to answer questions that are difficult for computers. For example:

  • Is the user interface easy to understand?
  • Are there steps in this workflow that could be removed?
  • Does this change impact other parts of the application in a way that makes their design wrong or inefficient?

Use your people to answer these types of questions, and automate the repetitive ones. Over time, manual tests consume more and more of the team's time, eventually becoming a very expensive part of your operations. Trading a little development time now for less manual work in the future by automating tests is almost always worth the effort.

What's in a test?

Using different software testing methodologies to help us create high-quality products is an important part of our culture, but like the products we create, testing them is a complicated idea with its own terminology.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects).


The most common reason people want to test their applications is to find and avoid bugs, but the scope can be much bigger:

  • How long does it take the application to perform a specific action?
  • How many concurrent processes can the application handle effectively?
  • Does the application have any security issues?
  • Does the application respond gracefully when an external service fails?

The list can get long and varies depending on the type of application you’re assembling. For example, if you are building a Docker™ container-based service that provides authentication, you won’t be concerned with a user interface. However, a desktop application may have no remote services and only one user, so the user interface design is critical.

These examples have a common theme: each test is attempting to answer a specific question.

What question are you answering?

There are many software testing methodologies available to you: unit, integration, functional, load, performance, UX, security, etc. The type of test you should use depends entirely on the question you’re attempting to answer. If you write a test without a clear question in mind, it often becomes too large, too slow, and difficult to maintain because you attempt to test everything at once. Instead, ask yourself the following first:

What aspect of the application are you checking, and how much of the application is required for the test to be valid?

Limiting yourself to only one question about your application allows you to limit your scope and target your test. A performance test will be highly concerned with how long the process takes, but it will not inspect the results of the operation in as much detail as other types of tests. Unit tests concern themselves with individual classes or methods, and anything outside is often mocked for speed and flexibility in introducing error cases.

By limiting your scope, you can write the smallest, fastest, most understandable, and most maintainable test that answers your question.

Types of software testing methodologies


Does this method or class behave as I intended?

The unit test is the smallest and usually the fastest type of test available to you. Creating a test that mocks out everything beyond your selected method or class allows you to quickly check error handling, invalid inputs, and observe your code's specific behaviors.
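As a sketch of that idea, here is a hypothetical unit test using Python's standard `unittest` and `unittest.mock` libraries. The `greet` function and its injected user-lookup service are invented for illustration; the point is that the collaborator is mocked, so the test exercises only the one function's behavior, including its error case.

```python
import unittest
from unittest.mock import Mock

# Hypothetical function under test: formats a greeting using an
# injected user-lookup service.
def greet(user_service, user_id):
    user = user_service.get_user(user_id)
    if user is None:
        return "Hello, guest!"
    return f"Hello, {user['name']}!"

class GreetTest(unittest.TestCase):
    def test_known_user(self):
        # The service is a mock, so no real lookup happens.
        service = Mock()
        service.get_user.return_value = {"name": "Ada"}
        self.assertEqual(greet(service, 1), "Hello, Ada!")

    def test_unknown_user(self):
        # Mocks make it trivial to simulate the error case.
        service = Mock()
        service.get_user.return_value = None
        self.assertEqual(greet(service, 99), "Hello, guest!")
```

Run with `python -m unittest`; because everything external is mocked, tests like these typically finish in milliseconds.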


Do these classes/libraries/applications work together as I expect?

Integration tests have perhaps the least agreed-upon definition. Some say they test multiple classes together; others treat them as a final check that the application works once all the pieces are assembled.

What those cases have in common is confirming that multiple pieces of software, regardless of their size, work together in the way you expected. Because the scope broadens with this question, these tests often cannot gain direct insight into databases or data models and have to inspect things being produced by APIs at various levels.
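A minimal sketch of that idea, with hypothetical components: unlike the unit test, nothing here is mocked. A real (if in-memory) store is wired into a service, and the test verifies the two work together through the service's public API rather than by peeking at the store's internals.

```python
# Hypothetical components: a real in-memory store and a service that
# depends on it. No mocks -- the test checks the pieces cooperate.
class InMemoryStore:
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

    def load(self, key):
        return self._data.get(key)

class CounterService:
    def __init__(self, store):
        self._store = store

    def increment(self, key):
        # Reads through the store, so a wiring bug would surface here.
        current = self._store.load(key) or 0
        self._store.save(key, current + 1)
        return current + 1

def test_increments_persist_through_the_store():
    service = CounterService(InMemoryStore())
    assert service.increment("visits") == 1
    assert service.increment("visits") == 2  # proves the store kept state
```

Swapping `InMemoryStore` for a real database client turns the same test into a broader integration check, at the cost of speed.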


Does my application work as expected when a user interacts with it?

Names like functional, end-to-end, and UI testing are often used interchangeably, but there is still variety in how the tests interact with the application. For example, end-to-end testing a database server is likely to happen over a TCP connection without requiring a user interface. Conversely, an online scheduling application may require a lot of interaction with a user interface that stores its data in the user's browser cache and on a server.

It may feel like this level of testing requires special tooling to manage, but in reality the building blocks are the same as unit and integration tests, and in many cases you can use the same libraries and test runners.

Tests in this category are best suited to ensuring the happy path of a particular workflow is functioning as expected, with many assertions during the test execution.
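The shape of such a test can be sketched as below. The sign-up workflow and the tiny `App` class are hypothetical stand-ins for a real application; the point is the structure: one happy path, walked start to finish, with an assertion after every step so a failure pinpoints exactly where the workflow broke.

```python
# Hypothetical application standing in for a real system under test.
class App:
    def __init__(self):
        self.users = {}
        self.sessions = set()

    def register(self, email):
        self.users[email] = {"active": False}
        return email in self.users

    def activate(self, email):
        self.users[email]["active"] = True
        return self.users[email]["active"]

    def log_in(self, email):
        if self.users.get(email, {}).get("active"):
            self.sessions.add(email)
            return True
        return False

def test_signup_happy_path():
    app = App()
    # Assert after every step, not just at the end, so the failing
    # step is obvious from the traceback.
    assert app.register("ada@example.com")   # step 1: account created
    assert app.activate("ada@example.com")   # step 2: account activated
    assert app.log_in("ada@example.com")     # step 3: user can log in
```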


Does process A complete within 100ms?

Performance testing diverges from the questions asked by unit, integration, and functional tests, because its primary question is no longer "Does it work correctly?" but "Does it work quickly enough?"

In some cases, you may be able to get a basic level of performance testing by adding timeouts to your other tests, which would alert you if those tests take too much time. While this does provide some information, you often want to measure an operation's duration by performing it dozens, hundreds, or thousands of times. In that case, or if you want to examine memory or CPU usage, you often want to create tests specifically for that purpose.
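One hedged sketch of a dedicated timing test, using only the standard library's `time.perf_counter`. The `process` function and the 100ms budget are invented for illustration; what matters is repeating the operation many times and asserting on the worst observed run, since a single measurement is noisy.

```python
import time

# Hypothetical operation whose duration we want to bound.
def process(items):
    return sorted(items)

def test_process_completes_within_budget():
    data = list(range(10_000, 0, -1))
    durations = []
    # Repeat many times: one sample is noise, the worst of 100 is signal.
    for _ in range(100):
        start = time.perf_counter()
        process(data)
        durations.append(time.perf_counter() - start)
    assert max(durations) < 0.1, f"slowest run took {max(durations):.3f}s"
```

The budget in the assertion should come from a real requirement, not a guess, and timing tests are best run on consistent hardware so results are comparable between runs.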


Does my application perform as I expect when N users are active?

While some of the other types of tests may be creating a load on your application, they’ll rarely intentionally consider its impact. Your intention with load testing is to ensure you can scale to a level you deem required or to find the limit where the application breaks.

The need to simulate many users means these tests will almost always take a different form from the others. Sometimes, they will be very expensive to execute, as you will generate the traffic of thousands of users. Because of the cost, you should be deliberate about when you create and run load tests. They can provide valuable insights, but it may not be practical to run them on every build.
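At small scale, the shape of a load test can be sketched in-process with a thread pool; the `Service` class here is a hypothetical stand-in for a real system, and a realistic load test would instead drive traffic at a deployed instance with a dedicated tool.

```python
import concurrent.futures
import threading

# Hypothetical service; the lock-guarded counter stands in for real
# request handling.
class Service:
    def __init__(self):
        self._lock = threading.Lock()
        self.handled = 0

    def handle_request(self):
        with self._lock:
            self.handled += 1
        return True

def run_load_test(service, users=50, requests_per_user=20):
    # Simulate N concurrent users, each issuing several requests.
    def user_session():
        return all(service.handle_request() for _ in range(requests_per_user))

    with concurrent.futures.ThreadPoolExecutor(max_workers=users) as pool:
        sessions = [pool.submit(user_session) for _ in range(users)]
        return all(s.result() for s in sessions)
```

After a run you can check both that every request succeeded and that the service handled `users * requests_per_user` requests, then raise the numbers until something breaks.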


Is it possible to view information without the required permissions?

The security of any application is extremely important. While there may be a temptation to roll these tests into the other types, it’s often wise to keep them separate. With a set of tests specifically intended to inspect the security of your application, you can quickly understand how that testing is executed and what aspects of your application are covered.

There are a number of security tools available today, but there could also be tests written in the same framework as your unit tests to inspect HTTP responses to ensure they have the correct headers and tokens.
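A minimal sketch of that second approach, with a hypothetical response builder: the header names checked here (`Cache-Control: no-store`, `X-Content-Type-Options: nosniff`) are real, widely used HTTP security headers, but whether they apply depends on your application; a real test would inspect actual HTTP responses rather than a dict.

```python
# Hypothetical stand-in for building an HTTP response; a real test
# would assert on the response returned by your application.
def build_response_headers(authenticated):
    headers = {"Content-Type": "application/json"}
    if authenticated:
        # Sensitive responses should not be cached, and browsers
        # should not second-guess the content type.
        headers["Cache-Control"] = "no-store"
        headers["X-Content-Type-Options"] = "nosniff"
    return headers

def test_sensitive_response_has_security_headers():
    headers = build_response_headers(authenticated=True)
    assert headers.get("Cache-Control") == "no-store"
    assert headers.get("X-Content-Type-Options") == "nosniff"
```

Keeping checks like these in their own suite makes it easy to see, at a glance, which security properties of the application are actually verified.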

Choose your test wisely

Using different software testing methodologies is crucial to building a good product. Don’t forget—after you have determined the question you are trying to answer, weigh all the available tests: unit, integration, functional, load, performance, UX, and security. By narrowing your question and your test, you can write the smallest, fastest, most understandable, and most maintainable test.

Docker is a trademark of Docker, Inc. in the United States and/or other countries.

About the Author

Matthew is a Software Architect on the Test Engineering team at Workiva, where he continues to apply his desire to understand complex systems by building testing frameworks and platforms.
