The Problem With Performance Testing
- NetSecOPEN
- Aug 13, 2017
- 1 min read

This is part of an ongoing blog series featured on the NetSecOPEN website, written by a range of project participants including tool vendors, network security vendors, and third-party test houses.
Performance testing is used throughout our industry. It helps make decisions. It helps build infrastructure. It helps make sales. But where do we get the data for this performance testing? Should we trust it? How can we use it best?
The unfortunate reality of performance testing is that it is all done differently. Every vendor, every enterprise, and every analyst does their own thing. You will probably never know all the details of the test methodologies, device configurations, or other test conditions. Yet the results are presented in similar language and formats, which would lead you to believe that different test results can be compared to one another. These results might appear on a datasheet, in a publication, or even online. Unfortunately, this produces a lot of differing (and possibly contradictory) information, leaving the end consumer with the problem of figuring out what it all means.
