Software Testing: PedrVO.com
Tools Used in Software Testing
yomu – Multi-Purpose Log File Analyser
yomu is a log file analyser written in Python for use in a Jenkins-based testing process. I personally use yomu mainly for visualising performance test results; the visualization of the test results is presented and distributed as an image or as a complete HTML report. At its core, yomu is a Python script template that can easily be adapted to fit your needs when analysing all kinds of bulk file contents.
I have a fairly good command of SQL, which I wanted to leverage in my daily log file analysis work. This is the fifth complete reimplementation, and now I am quite happy with it.
The logfile analysis is split into the following processing steps:
- log file parsing
- data clean up (timestamp mangling, removal of duplicate records, etc.)
- data aggregation
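The three steps above can be sketched in a few lines of Python. This is a minimal illustration, not yomu's actual code: the log line format and the `parse`/`analyse` names are assumptions, and an in-memory SQLite database stands in for the SQL layer mentioned above.

```python
import re
import sqlite3

# Hypothetical log format: "2010-01-15 10:00:01 GET /home 123ms"
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<method>\w+) (?P<path>\S+) (?P<ms>\d+)ms"
)

def parse(lines):
    """Step 1: parse raw log lines into records, skipping unparsable lines."""
    for line in lines:
        m = LINE_RE.match(line.strip())
        if m:
            yield m.group("ts"), m.group("path"), int(m.group("ms"))

def analyse(lines):
    """Steps 2 and 3: clean up (remove duplicates) and aggregate via SQL."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE req (ts TEXT, path TEXT, ms INTEGER)")
    db.executemany("INSERT INTO req VALUES (?, ?, ?)", parse(lines))
    # Data clean-up: drop exact duplicate records.
    db.execute("CREATE TABLE clean AS SELECT DISTINCT ts, path, ms FROM req")
    # Aggregation: request count and average response time per path.
    return db.execute(
        "SELECT path, COUNT(*), AVG(ms) FROM clean GROUP BY path ORDER BY path"
    ).fetchall()
```

Keeping the clean-up and aggregation in SQL is what lets a tool like this lean on existing SQL knowledge instead of hand-rolled Python loops.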
Sample django-cp continuous performance testing GUI
django-cp is a tool for implementing continuous performance tests. The tool was built for my favourite web framework, django. The project was born during a sprint at PyConUK 2008 in Birmingham, where I sat down with Jacob (one of the lead developers) to discuss what would help identify performance regressions caused by changes and patches to the django project. The basic idea was to reuse the Mozilla Graph-Server to graphically monitor performance regressions.
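The core of any such regression monitor is comparing fresh benchmark timings against a baseline. The sketch below is my own minimal illustration of that idea, not django-cp's or the Graph-Server's actual logic; the function names and the 10% tolerance are assumptions.

```python
def is_regression(baseline_ms, current_ms, tolerance=0.10):
    """Flag a timing as a regression when it exceeds the baseline
    by more than the given tolerance (10% by default)."""
    return current_ms > baseline_ms * (1.0 + tolerance)

def check_benchmarks(baseline, current, tolerance=0.10):
    """Compare per-benchmark timings (dicts of name -> milliseconds)
    and return the sorted names of benchmarks that regressed."""
    return sorted(
        name for name, ms in current.items()
        if name in baseline and is_regression(baseline[name], ms, tolerance)
    )
```

Run after every change, a check like this turns a one-off benchmark into a continuous performance test: the interesting output is not the absolute numbers but the trend against the previous runs.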
Sample SLOC and McCabe metrics output
The metrics tool was implemented to create software metrics for big code bases like Mozilla Firefox, the Apache Web Server, or the Python code base itself. One common attribute of big code bases is that they usually contain code in multiple programming languages. Each language already has its own metrics package, but unfortunately the resulting metrics are not compatible with each other: every implementation has its own philosophy about what counts as a line of code, whether comments are counted, and how statements spread over multiple lines are handled. From my perspective, the only viable approach is to compute the metrics for all the languages with one package that counts them in a consistent way.
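To make the consistency point concrete, here is a minimal sketch of a SLOC counter that applies one policy to every language. It is not the metrics tool's actual implementation: the `LINE_COMMENT` table and the `sloc` signature are assumptions, and real counting needs fuller parsing (block comments, strings containing comment markers, and so on).

```python
# Hypothetical per-language line-comment syntax.
LINE_COMMENT = {"python": "#", "c": "//", "javascript": "//"}

def sloc(source, language, count_comments=False):
    """Count source lines of code with ONE consistent rule set:
    blank lines never count, and comment-only lines count only
    when count_comments is True -- the same policy for every language."""
    marker = LINE_COMMENT[language]
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # blank lines never count
        if stripped.startswith(marker) and not count_comments:
            continue  # comment-only line, excluded by the shared policy
        total += 1
    return total
```

Because the blank-line and comment rules live in one place, a Python count and a C count produced by this function are directly comparable, which is exactly what per-language tools fail to guarantee.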
Sample quality visualization with visualizer
visualizer is a set of tools for visualizing software quality. I have been working exclusively on testing software for over five years now. During this time I was constantly reimplementing tools for visualizing test results, such as log file parsers. I kept switching between different open source visualization packages because my taste in software libraries was also evolving.
During the past year I wanted to evolve the visualization tool-set again, so I evaluated tools like Matplotlib, R, Processing, John Resig’s Processing.js, and Jit, and read some books about statistics and graphics. At one stage I found myself rethinking my approach to visualization altogether. Meanwhile I read through a bunch of books and hundreds of papers, PhD theses, and websites on information visualization. In January 2010 I went to Dallas for a tutorial on ‘Presenting Data and Information’ by Professor Edward Tufte. With his inspiration I was able to work out a more effective approach to visualizing software quality.
I used the 2009 Christmas break to lay some groundwork for the new visualizer. The new tool is based on YUI3 and CouchDB, and so far I am very happy with it. The log file parser is, of course, implemented in Python. Meanwhile the data store has been replaced by a custom web service backed by a PostgreSQL database.
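A backend like the one described above boils down to a small relational schema plus a query that hands the frontend a time series per metric. The sketch below is purely illustrative: the table and function names are my assumptions, not visualizer's actual schema, and SQLite stands in here for PostgreSQL so the example is self-contained.

```python
import sqlite3

def create_store(db):
    """Hypothetical schema: one row per test run, many metric values per run."""
    db.executescript("""
        CREATE TABLE test_run (
            id INTEGER PRIMARY KEY,
            started_at TEXT NOT NULL
        );
        CREATE TABLE result (
            run_id INTEGER REFERENCES test_run(id),
            metric TEXT NOT NULL,
            value REAL NOT NULL
        );
    """)

def add_run(db, started_at, results):
    """Record one test run and its (metric, value) pairs."""
    cur = db.execute("INSERT INTO test_run (started_at) VALUES (?)",
                     (started_at,))
    run_id = cur.lastrowid
    db.executemany("INSERT INTO result VALUES (?, ?, ?)",
                   [(run_id, metric, value) for metric, value in results])
    return run_id

def metric_history(db, metric):
    """One metric's value across runs, ordered by time -- the shape a chart needs."""
    return db.execute("""SELECT r.started_at, res.value
                         FROM result res
                         JOIN test_run r ON r.id = res.run_id
                         WHERE res.metric = ?
                         ORDER BY r.started_at""", (metric,)).fetchall()
```

Whatever serves the data (CouchDB, a custom web service, or PostgreSQL), the frontend visualization only ever needs this ordered (timestamp, value) shape.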
The Hitchhiker’s Guide to Test Automation
If you are reading this you are already almost safe, because ‘The Hitchhiker’s Guide to Test Automation’ has ‘DON’T PANIC’ printed on its cover in capital letters.
The Hitchhiker’s Guide to Test Automation is a collection of best-practice approaches to applying open source testing tools. The goal of this guide is to ease the adoption of open source testing tools in all kinds of software development projects.