Proof Measurement and Estimation is an activity of the Trustworthy Systems group.


Aims: to understand how to measure and estimate large-scale formal verification projects.
Context:
In traditional software engineering, development teams start with a requirements specification, design the system, implement it in a programming language, and then create and run test cases to check whether the system is built as designed and as required. However, because computer systems are so complex, testing can never check all possible behaviors of the system.
Software engineering with formal methods is different. Teams start with a formal specification: a mathematical description of the requirements and/or design. Teams then use mathematical proof to show that the implemented system functions as designed and as required. This is called formal verification, and it can provide an exhaustive check of all possible behaviors of the software, given some reasonable idealising assumptions about the underlying platform.
The formal verification projects undertaken at NICTA create large machine-checked mathematical proofs, authored by teams of people. Running these large formal verification projects is challenging: estimating their size and effort is a key management task, but almost nothing is known about how to do this well.
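One direction explored in the publications below is to predict proof size (and hence effort) from the size of the formal specification, where proof size was observed to grow roughly quadratically with statement size. The following sketch illustrates the general idea with entirely hypothetical numbers, not project data; the function name and figures are illustrative only.

```python
# Illustrative sketch (hypothetical data, not measurements from the
# project): fitting a simple quadratic model of proof size against
# specification statement size, in the spirit of a leading-indicator
# approach to proof-effort estimation.
import numpy as np

# Hypothetical (statement size, proof size in lines) observations.
spec_sizes = np.array([10, 20, 40, 80, 160], dtype=float)
proof_sizes = np.array([120, 480, 1900, 7700, 31000], dtype=float)

# Fit proof_size ~ a * spec_size**2 + b * spec_size + c.
a, b, c = np.polyfit(spec_sizes, proof_sizes, deg=2)

def estimate_proof_size(spec_size: float) -> float:
    """Predict proof size for a given specification size (hypothetical model)."""
    return a * spec_size**2 + b * spec_size + c
```

Such a model is only as good as the historical data behind it; the cited empirical studies examine when and how well specification-based predictors of this kind work in practice.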
Publications:
Daniel Matichuk, Toby Murray, June Andronick, Ross Jeffery, Gerwin Klein and Mark Staples: Empirical study towards a leading indicator for cost of formal software verification. International Conference on Software Engineering, pp. 11, Firenze, Italy, February 2015.
Ross Jeffery, Mark Staples, June Andronick, Gerwin Klein and Toby Murray: An empirical research agenda for understanding formal methods productivity. Information and Software Technology, Volume 60, pp. 102–112, January 2015.
Mark Staples, Ross Jeffery, June Andronick, Toby Murray, Gerwin Klein and Rafal Kolanski: Productivity for proof engineering. Empirical Software Engineering and Measurement, pp. 15, Turin, Italy, September 2014.
Mark Staples, Rafal Kolanski, Gerwin Klein, Corey Lewis, June Andronick, Toby Murray, Ross Jeffery and Len Bass: Formal specifications better than function points for code sizing. International Conference on Software Engineering, pp. 1257–1260, San Francisco, USA, May 2013.
Mark Staples: Finding and responding to failure in large formal verifications. Abstract, Dagstuhl Reports, October 2012.
June Andronick, Ross Jeffery, Gerwin Klein, Rafal Kolanski, Mark Staples, Jason Zhang and Liming Zhu: Large-scale formal verification in practice: A process perspective. International Conference on Software Engineering, pp. 1002–1011, Zurich, Switzerland, June 2012.
Jason Zhang, Gerwin Klein, Mark Staples, June Andronick, Liming Zhu and Rafal Kolanski: Simulation modeling of a large scale formal verification process. International Conference on Software and Systems Process, pp. 3–12, Zurich, Switzerland, June 2012.
