I participated in the conference and would like to share my impressions and notes.
I really liked the idea of having such a conference, to get some exposure to what’s going on in the academic world regarding software testing. This, together with promising talks from industry practitioners, was a good reason to head south and participate.
The hosting was great and the organizers deserve a good word for their efforts. The conference was held in the university’s “Senate auditorium”, which is a very pleasant and cozy (though well air-conditioned) place. Everything was well organized.
The first talk, after a few greetings, was by Dr. Jasbir Dhaliwal from the University of Memphis, who talked about his experience collaborating with industry. He established the Systems Testing Excellence Program (STEP) in collaboration with FedEx.
He talked about the challenge of collaboration between the scientific approach of academia and the art approach of industry practitioners, which he called the Science–Art gap.
During the day, several speakers referred to the different approaches of industry and academia, where academia is focused on verification and industry is more concerned with validation.
Nir Caliv, a Principal Test Architect from Microsoft Israel, talked about Continuous Delivery and Test in Production. He described how the shift from producing boxed software packages to services in the cloud changed their development and testing methodology: for example, shifting from long development and test cycles that end in mass distribution to very short ones that reach the production environment very fast. He talked about the impact of the need and ability to change the software quickly. He mentioned the move from modeling the system to using the “real thing”, and from simulating the end-user environment to monitoring the production service itself. Monitoring gained a much more significant place than in the past, and “quality features” that enable this monitoring were added to the product. Analyzing the monitored data has become a major part of the role of the “Software Development Engineer in Test”, as Microsoft calls their testers, and they have moved from passive monitoring to data mining that involves machine learning.
It seems that when monitoring changed from a “luxury” to a “necessity”, it forced them to put a large effort into this area. Organizations whose products have not yet made that move can still learn from this direction, look at its benefits for their own testing, and invest more in monitoring capabilities and in gathering data from the production environment.
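To make the monitoring idea a bit more concrete, here is a minimal sketch (my own illustration, not anything presented in the talk) of flagging anomalies in a production metric against a rolling baseline; the latency numbers, window size and threshold are made up:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag samples that deviate more than `threshold` standard deviations
    from the rolling baseline of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append((i, samples[i]))
    return anomalies

# Hypothetical response-time samples (ms) pulled from production monitoring:
latencies = [102, 98, 105, 101, 99] * 5 + [480]   # a sudden spike at the end
print(detect_anomalies(latencies))                # -> [(25, 480)]
```

Real production monitoring obviously goes much further (structured telemetry, machine-learning models over the mined data), but even a simple baseline check like this shows how production data can take over part of the role that long pre-release test cycles used to play.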
In the lobby, Dr. Avi Ofer from ALM4U gave a demo of a new method for verification using temporal logic. The method originated in hardware verification, but he showed how it can be used in software verification, including the ability to go back in time and debug by replaying the actions without running the debugger again.
Dr. Meir Kalech from Ben-Gurion University talked about “Artificial Intelligence Techniques to Improve Software Testing”. In his talk, he described a project that uses a model-based approach to diagnosis in complex environments like software, based on the IEEE Zoltar toolset for automatic fault localization. His system suggests scenarios for failure isolation. While he suggested that the system be used to suggest the “next step” to testers who experience a failure, my view is that it is more suitable as an addition to automated checks than as a tool for human beings.
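For context, Zoltar-style automatic fault localization is typically spectrum-based: it ranks code components by how strongly their execution correlates with failing tests. Below is a minimal sketch of one common ranking formula (Ochiai); the coverage data and component names are invented for illustration and are not from the talk:

```python
from math import sqrt

def ochiai_ranking(coverage, verdicts):
    """Rank components by suspiciousness using the Ochiai formula.

    coverage: {test_name: set of components executed by that test}
    verdicts: {test_name: True if the test passed, False if it failed}
    """
    total_failed = sum(1 for passed in verdicts.values() if not passed)
    components = set().union(*coverage.values())
    scores = {}
    for comp in components:
        failed_cov = sum(1 for t, comps in coverage.items()
                         if comp in comps and not verdicts[t])
        passed_cov = sum(1 for t, comps in coverage.items()
                         if comp in comps and verdicts[t])
        denom = sqrt(total_failed * (failed_cov + passed_cov))
        scores[comp] = failed_cov / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical coverage of three tests over two functions:
coverage = {"t1": {"parse", "render"}, "t2": {"render"}, "t3": {"parse"}}
verdicts = {"t1": False, "t2": True, "t3": False}
print(ochiai_ranking(coverage, verdicts))  # 'parse' ranks above 'render'
```

A tool built on top of such a ranking can then propose the “next step” (which component to inspect, which test to run next), which is exactly why I think it fits naturally into an automated check pipeline.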
Ron Moussafi from Intel talked about “The Fab Experience: What Intel Fabs Can Teach Us about Software”. Ron has a lot of experience in the semiconductor manufacturing world, but he currently manages two test departments, one of which I work in. He talked about the difference in the approaches of the two industries. While the fab (semiconductor fabrication plant) has a culture that focuses on quality, where quality is a main factor that is measured with no tolerance for incompatibility, the software industry culture is focused on other aspects. One of the causes of the difference is that in the manufacturing world the cost of a single error is very noticeable, while in the software industry the cost of an error is usually hard to measure, though it has the impact of “death by a thousand papercuts”. While “fab” people see themselves as scientists, the image of the software guy is more of a “lone ranger”. Ron suggested some actions to improve the quality culture of software development organizations, like calculating and publishing the cost of defects and connecting each failure to its cause in order to improve the process, instead of just focusing on fixing the problem.
Prof. Mark Last from the host university talked about “Using Data Mining for Automated Design of Software Tests”. He showed a method that performs data fuzzing on a legacy, “known to be good” version of the software and uses data mining techniques to select a regression package, which is then run on a new version of the software.
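As I understood it, the core idea is to treat the legacy version as an oracle: generate inputs, record how the known-good version behaves, and then decide which cases are worth keeping as regression tests. The sketch below shows only the simplest “golden master” part of that idea; the data-mining-based selection itself is not reproduced here, and all function names are hypothetical:

```python
import json
import random

def record_golden_master(legacy_fn, n_cases=100, seed=42, path="golden.json"):
    """Feed randomly generated inputs to the known-good legacy version and
    record the input/output pairs to serve as a regression oracle."""
    random.seed(seed)
    cases = []
    for _ in range(n_cases):
        x = random.randint(-1000, 1000)          # naive input fuzzing
        cases.append({"input": x, "expected": legacy_fn(x)})
    with open(path, "w") as f:
        json.dump(cases, f)

def check_new_version(new_fn, path="golden.json"):
    """Replay the recorded cases against the new version and report mismatches."""
    with open(path) as f:
        cases = json.load(f)
    return [c for c in cases if new_fn(c["input"]) != c["expected"]]

# Hypothetical legacy and new implementations of the same function:
legacy = lambda x: abs(x)
new = lambda x: max(x, 0)   # regression: negative inputs now return 0
record_golden_master(legacy)
print(len(check_new_version(new)), "mismatches found")
```

The data mining part would then sit on top of such recorded data, learning which generated cases exercise distinct behavior and are therefore worth keeping in the regression package.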
Prof. Ron F. Kenet from KPA Ltd. talked about “Process Improvement and CMMI for Systems and Software”. Since I am not really interested in the subject, I did not take notes on his talk. However, one of the examples he gave did grab my attention, as it can be connected to one of the other talks: he demonstrated measuring the point in the development process at which faults occur, which can be useful if you want to try Ron Moussafi’s suggestion to connect failures to their causes.
Dr. Amir Tomer from the Kinneret Academic College talked about “Software Intensive Systems Modeling - A Methodological Approach”. He described his method of using UML to describe software systems. In his method, each entity has four characteristics: two that relate to the requirements on the entity (the environment it operates in and the services it provides) and two that relate to its design (its structure and its behavior). He showed how he connects the entities, so that the services and behavior of one entity are actually the environment of another entity.
Scott Barber is a well-known tester, especially if you are interested in performance testing. With no offence to the other speakers, he was the main reason for my participation in the conference. Scott’s talk was “A Practitioner's View of Commercial Business's Desired Value Add from Software Testing”. He was able to educate not only the academic participants but also the industry people on how commercial business decision makers view testing, and on the challenge of explaining the added value of testing to them. This can be quite difficult when they think of testing as just another thing they have to pay for to get software built (much like the caffeine that is needed), and don’t distinguish between testing and quality. Scott said that “the industry pays for value to get to the market sooner, faster and cheaper”; these are the things you want to focus on, and you should make sure to communicate your contribution to achieving them.
I was not able to participate in the panel discussion that followed, “Product Quality and Software Quality: Are They Aligned?”
To summarize, the conference was very interesting and exposed the participants to different views of testing. While, as a practitioner, I was not familiar with the language and approach of the academic speakers, and I have to admit I giggled a bit when they talked about “lines of code” or gave an example in FORTRAN, it was refreshing to hear such different views.
The academic engagement with testing is in its early phases, which means there is a lot of room for great things to happen. This conference was a big step in that direction and a good way to connect people from the two disciplines.
As an Intel employee, I am proud that Intel was a major sponsor of such a great event, which was open to anyone who has an interest in our field.
While the verification aspects of the profession were well represented, I missed academic speakers from other disciplines that have no less impact on our practice, and I would argue even more. Cooperation with researchers in areas such as psychology, anthropology and education may enable academia to address more of the industry’s needs and be more relevant to the industry.