
MetriKon 2006 – Conference Report

Thursday, 2nd November 2006

Frances Paulisch

Siemens

Measurement Program at Siemens

Siemens is a software company – although not recognized as such. Software is embedded in almost all of its products.

Siemens collects quantitative data for effort estimation. This works only if the business groups participate in the measurement system early on. Endorsing the system protects the business units from other management initiatives, which is what made it work.

The indicators are based on five levels: attributes, base measures, derived measures, indicators based on thresholds, and information resulting from analysis. For comparison, a common product life cycle model (Plan – Define – Realize – Commercialize/Operate) is used, together with four measurement categories: Time, Size, Quality, and Cost.

However, decisions on how, for example, Size is measured are still under way. The five selected indicators are:

1. Schedule compliance

2. Budget compliance

3. Cost of Defect Correction

4. Feature changes

5. Defect Detection Profile

These metrics are compiled for the business on four slides, with schedule and budget compliance combined into one. An annual “Measurement Excellence Day” helps to keep participants in the software measurement system (SMS) loyal to the program.

Adam Trendowicz

Fraunhofer IESE, München

Using CoBRA® for Establishing Cost Estimation Models

The model allows identifying cost drivers by setting up a causal model, using historical data and a sizing method to yield a probability curve for the estimate.
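
The presentation’s tooling is not reproduced here, but the idea can be sketched: each cost driver adds a percentage of “cost overhead” on top of a nominal effort, elicited from experts as a triangular distribution, and Monte Carlo sampling over the drivers yields the probability curve. All drivers, weights, and figures in this sketch are hypothetical:

    # Minimal sketch of the CoBRA idea (not IESE's implementation):
    # cost drivers contribute a percentage "cost overhead" on top of
    # nominal effort; sampling them yields a probability curve.
    import random

    # Hypothetical cost drivers: (min %, most likely %, max %) overhead,
    # as elicited from experts via triangular distributions.
    COST_DRIVERS = {
        "requirements volatility": (0, 10, 30),
        "team inexperience":       (0,  5, 25),
        "platform novelty":        (0,  8, 20),
    }

    NOMINAL_PROD = 0.5   # assumed hours per size unit for a "perfect" project
    SIZE = 400           # assumed functional size of the project

    def sample_effort():
        overhead = sum(random.triangular(lo, hi, mode)
                       for lo, mode, hi in COST_DRIVERS.values())
        return NOMINAL_PROD * SIZE * (1 + overhead / 100.0)

    # Monte Carlo simulation: the sorted samples approximate the
    # cumulative probability curve used for estimation.
    samples = sorted(sample_effort() for _ in range(10_000))
    print("median effort:", samples[len(samples) // 2])
    print("80% quantile :", samples[int(len(samples) * 0.8)])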

The model was applied at Oki, a large Japanese manufacturer, where it improved prediction accuracy substantially, to a standard deviation of 22% and better.

Daniel Rodriguez

Using Genetic Algorithms to Generate Estimation Models

Two datasets were used: ISBSG and NormWE. ISBSG rates worse in terms of correlation coefficient. Dividing the data set into coherent subsets improves estimation accuracy.

Statistical analysis based on linear regression was used. However, different fitness functions could yield even better results.
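
The concrete fitness functions from the paper are not given here; a minimal sketch, assuming the widely used MMRE (mean magnitude of relative error) as fitness and a two-parameter model effort = a * size**b, shows what a genetic algorithm would evolve candidates against (the dataset below is invented for illustration):

    # One possible fitness function for a genetic algorithm evolving
    # estimation equations: the mean magnitude of relative error (MMRE)
    # of a candidate model effort = a * size ** b. Lower is fitter.
    def mmre(candidate, projects):
        a, b = candidate
        errors = [abs(effort - a * size ** b) / effort
                  for size, effort in projects]
        return sum(errors) / len(errors)

    # Hypothetical (size, effort) pairs standing in for an ISBSG subset.
    projects = [(100, 900), (250, 2600), (400, 3900), (60, 700)]
    print(mmre((9.0, 1.0), projects))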

Oguz Atak

Middle East Technical University, Ankara

Software Code Size Estimation

Size estimation is the weakest point in cost estimation. A better estimation method for OO development is particularly wanted.

However, the initial approach did not work, and comparing the regression equations for design and code did not yield any new results. Furthermore, it remains open how code size relates to cost, especially in an OO environment.

Luigi Buglione

École de technologie supérieure, Montréal, Canada

Suggestions for Improving Measurement Plans

A BMP (Balancing Multiple Perspectives) application in Turkey provided new insights into integrated software measurement.

Balancing indicators is critical; it corresponds to the economic viewpoint: Cost of Quality vs. Cost of Non-Quality. Causal relationships alone are not enough. For BMP, a questionnaire is available at www.geocities.com/lbu_measure/questlime/bmp.htm.

Currently, project managers focus on time and cost; at the very least, they should also include quality somehow. And size is the basis of all metrics!

Sylvie Trudel

CRIM, Montréal

Measurement Programs in Small Enterprises

Small enterprises often have their owner-manager dealing with all aspects of project management within the usual 60–90 hour week. Thus, a measurement program must be very efficient.

A Pareto analysis of the defect origins was based on a simple 15-step process model; the defects were categorized according to Humphrey’s defect categories. COSMIC FFP was used for sizing; developers liked it more than IFPUG. However, sizing depends on who measures. Moreover, sizing depends on the phase, because requirements tend to grow.

After they modified COSMIC somewhat (adding another point whenever data was moved with more than 12 data attributes), their estimation worked perfectly.
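
A minimal sketch of that modified counting rule (the data representation is invented for illustration):

    # One Cfsu per data movement, plus one extra point whenever a
    # movement carries more than 12 data attributes.
    def cosmic_size(movements):
        """movements: list of attribute counts, one per data movement."""
        return sum(2 if attrs > 12 else 1 for attrs in movements)

    print(cosmic_size([3, 15, 7, 20]))  # -> 6: four movements, two "wide"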

Seeing the numbers (2 hours/Cfsu for development; 0.33 hours/Cfsu per defect fix) made them aware of what to focus on. Introducing reviews helped a lot.

In a second case, the company wanted to reach a high CMMI level in order to be able to compete with the big Indian players. They documented their processes on only five pages and provided templates for all work products. Measuring is now done within the hour, and all Change Requests are valued with an “expected benefit” in order to set priorities right.

Frank Vogelezang

Sogeti, NL

Using COSMIC Full Function Points in an ERP Environment

Sizing a business process with Cfsu relies on counting data movements just as with software development, although ERP modules used as supporting software then count as components only.

Sogeti calculates, for instance, time-to-delivery similarly to the ISBSG Reality Checker, as an exponential function that depends on size.
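
As an illustration of such a size-driven delivery model, here is a sketch in the power-law form typically used for such curves; the coefficients are invented, not Sogeti’s calibration:

    # Illustrative delivery model: duration grows as a power of size.
    def time_to_delivery(size_cfsu, a=0.5, b=0.4):
        """Estimated duration in months as a power function of size."""
        return a * size_cfsu ** b

    print(round(time_to_delivery(300), 1))  # ~4.9 months for 300 Cfsu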

Other uses of the size metric include:

- Stability rate
- Direct cost
- Scope creep
- Change management
- Technology choice

Learnings

COSMIC FFP is applicable based on process information alone. With calibration, estimates can even be made for the expected cost and the time required to implement new business processes.

Cigdem Gencel

Middle East Technical University, Ankara

Mapping Concepts of Functional Size Measurement

A unification model is proposed for the three methods IFPUG, Mark II, and COSMIC FFP. It relies on relating all the different items, such as functional processes and logical transactions, to their underlying data groups. These data groups are considered similar (which does not hold for real-time embedded software).

The resulting mapping might be valid for data-rich measurements.

Carol Dekkers

Quality Technologies, USA

Combat the Resistance Against Measurements

Measurements are very often misunderstood as measurements of people rather than of processes. Accuracy is misunderstood as well: managers associate measurements with something exact, like the number of goals scored in the last soccer game.

The definitions are not clear and not comparable. But are speedometers exact? They are not. Yet they are good enough to base business decisions upon.

Correlations must be backed by data quality, consistency, and a fit with people’s influence on the projects. Perceptions must be considered and acted upon. Comparisons must compare apples with apples only. Silver bullets do not exist.

Friday, 3rd November 2006

Benoit Vanderose

University of Namur, Belgium

Measurement of the Quality of Design

Requirements translate into design, and design in turn into code. The relationships between these important translations are anything but straight and easy. For measurement, this observation means that translating functional size into code is anything but monolithic. So what are the invariants that one can measure?

Some design attributes are directly related: for instance, the number of UML diagrams corresponds to the number of Java modules. There are indications that such translations also exist where the relation is not fully direct; for example, the size of UML diagrams should somehow translate into the size of code.

Less direct are relationships such as the legibility of a design, which may translate into the number of errors in the code. However, the problem of missing requirements is not addressed.

Luca Santillo

Consultant, Rome

Error Propagation in SW Measurement and Estimation

Uncertainty is a key attribute of every measurement in science. There are random errors and systematic errors, and moreover, uncertainty adds up from the many input variables.

Basic statistical theory tells us that such measurements, if independent, follow a normal distribution. Uncertainty propagates through the first derivatives of the measurement function.
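
For a measurement f(x_1, …, x_n) with independent inputs, the standard first-order (Gaussian) error propagation law reads:

\[
  \sigma_f^{2} \;=\; \sum_{i=1}^{n}
      \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^{2}
\]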

This theory can be applied to function point measurements. It answers the criticism that IFPUG measurements do not follow the rules of measurement theory (because a scalar, non-continuous metric is multiplied by the VAF factor).

The more input factors we add into consideration in order to refine an estimate, the more uncertainty our estimate has.

El Hachemi Alikacem

CRIM Montréal, Canada

Generic Metrics Extraction Framework

The presenter proposed a tool for extracting software metrics from source code in an OO development environment.

Each OO language requires a mapping and parsing module that transforms the code into arrow terms. The arrow terms are used to extract primitive factors from the code, with respect to their use and environment.

Metrics extracted include the number of classes, the number of independent classes, the total number of classes, and the average inheritance depth (the latter for visible attributes only).

In total, the tool collects more than 40 metrics from code. It supports Java; support for C++ is under construction.
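
As a toy illustration of code-based metrics extraction (not the presented framework, which uses proper per-language parsers and arrow terms), a few lines can already count classes and independent classes in Java source:

    # Toy extraction of two OO metrics via regular expressions;
    # a real tool would use a full parser per language.
    import re

    CLASS_RE = re.compile(r'\bclass\s+(\w+)(?:\s+extends\s+(\w+))?')

    def class_metrics(java_source):
        classes = CLASS_RE.findall(java_source)
        return {
            "classes": len(classes),
            "independent classes": sum(1 for _, parent in classes
                                       if not parent),
        }

    src = "class A {} class B extends A {} class C {}"
    print(class_metrics(src))  # {'classes': 3, 'independent classes': 2}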

Maya Daneva

University of Twente, NL

Functional Size Measurements for ERP Solutions

Current practice is writing business cases, maybe because no other decision criteria are available. Following financial custom, people consider ERP projects rather as financial options that should bear profit over time.

The research project, called COSMOS, unites industry and university representatives and focuses on using COSMIC for functional sizing, integrated into requirements engineering. The project addresses uncertainties by cataloguing them and trying to find actions that lower uncertainty.

This seems to be work in progress.

Alain Abran

École de technologie supérieure, Montréal, Canada

Survey of Automation Tools Supporting COSMIC FFP

A framework of tools for Functional Size Measurement (FSM) distinguishes Measurement Support, Measurement, Storage, and Utilization.

An expert system can be found at http://cosmicxpert.ele.etsmtl.ca/; it can be used for free after asking for a password.

Automated measurement support is available in some tools, such as the commercial MeterIT from Telmaco and the free SIESTA from Sogeti (ask Frank Vogelezang, or siesta@sogeti.nl). However, the level of automation is limited: it is still necessary to input the number of data movements, or at least to identify functional processes. The ideal would be to extract functional processes from some requirements specification framework.

Stephan Frohnhoff

sd&m München

Use Case Points in the Industrial Environment

With Use Case Points, sd&m reaches an average accuracy of 6% with a 27% standard deviation. It is thus as good as any other estimation method and serves sd&m’s needs well for estimating large turnkey applications.

sd&m evaluated its productivity factor thanks to a sensible accounting scheme; with it, Use Case Points produce correct estimates. The key to successfully defining the productivity factor is correct accounting. Furthermore, one has to identify the characteristics of the different application cases.
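
For readers unfamiliar with the method, a worked sketch of the classic Use Case Points calculation (Karner’s formulation) follows; sd&m’s calibrated variant and its productivity factor are not public, so all weights and factors below are illustrative assumptions:

    # Classic Use Case Points: weighted use cases and actors, adjusted
    # by technical/environmental factors, times a productivity factor.
    UUCW = 4*5 + 6*10 + 2*15   # use cases: 4 simple, 6 average, 2 complex
    UAW  = 2*1 + 1*2 + 3*3     # actors:    2 simple, 1 average, 3 complex
    TCF, ECF = 1.0, 0.95       # assumed adjustment factors
    PF = 20                    # assumed productivity: hours per UCP

    ucp = (UUCW + UAW) * TCF * ECF
    print(f"{ucp:.1f} UCP -> {ucp * PF:.0f} hours")  # ~116.8 UCP -> 2337 hours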

The Use Case Points method is not suitable for estimating technical improvements and maintenance.

Thomas Fehlmann

Euro Project Office, Zurich

When to use IFPUG? When COSMIC?

Six Sigma has become a major movement in industry and is rapidly gaining interest in software development and maintenance as well. The Six Sigma management strategy focuses on measurements for reducing defects early in the value chain processes.

What is a Defect in Software Development?

Six Sigma is about eliminating defects in the value chain processes. A defect is a mistake or wrong behaviour in the product or service that affects customers’ needs.

Functional size measurement is the foundation of all Six Sigma metrics. Mastering it is a must for all Six Sigma Green and Black Belts who dare to deal with IT processes, be it in development or operations. However, which measurement method suits Six Sigma better: the well-established IFPUG 4.2 Function Point Analysis, or the more modern ISO standard ISO/IEC 19761, known as COSMIC FFP V2.2?

Interestingly, both measurement methods seem complementary rather than competing when used in a Six Sigma setting, a setting targeted at defect avoidance rather than project estimation with a commercial or engineering background. The two methods serve different purposes.

Ton Dekkers

Shell Information Technology

SPI, is this the benefit?

ISBSG is upgrading their repositories and will issue two more data sets for maintenance and for testing projects in the near future.

According to the Goal–Question–Metric schema, the following five metrics are collected: Project Delivery Rate, Speed of Delivery, Defect Density, Time Schedule, and Effort Cost.

The target is: 90% of the projects should be within schedule, within budget, and within planned scope. That target was easy to achieve, using the “GPS” method of adapting goals to the targets met.

What did help was comparing the metrics with industry benchmarks. Revised estimates based on industry benchmarks proved much better than the original values based on Shell’s own historical data; they were also less biased.


