Tuesday, July 31, 2007

Data Slice

Example:
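
Informally, a data slice for an output variable collects the statements (more precisely, the data tokens) that can affect that variable's value; slice-based cohesion metrics then compare the slices of a module's output variables. Here is a minimal sketch of the idea (the function below is invented purely for illustration, not taken from the source paper):

#!python
# A small invented function used only to illustrate data slices.
def totals(values):
    total = 0            # in the slice of total (and of avg)
    count = 0            # in the slice of count (and of avg)
    for v in values:     # in every slice: it controls the updates below
        total += v       # in the slice of total (and of avg)
        count += 1       # in the slice of count (and of avg)
    avg = total / count  # in the slice of avg only
    return total, avg

print(totals([1, 2, 3]))

# Slice(total) = {total = 0, the for loop, total += v}
# Slice(avg)   = every statement in the function
# Slice-based cohesion metrics examine how much such slices overlap:
# heavy overlap suggests the function computes closely related outputs.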


Source: Slice-Based Cohesion Metrics and Software

The Mood Metrics Suite

The MOOD metrics suite consists of six metrics:

1 Method Hiding Factor (MHF): The MHF is defined as the ratio of the sum of the invisibilities of all methods defined in all classes to the total number of methods in a design.

2 Attribute Hiding Factor (AHF): The AHF is defined as the ratio of the sum of the invisibilities of all attributes defined in all classes to the total number of class attributes in a design.

3 Coupling Factor (CF): The CF metric is defined as the ratio of the number of class couplings to the maximum possible number of class couplings in a design.

4 Method Inheritance Factor (MIF): The MIF metric is defined as the ratio of the number of inherited (and not overridden) methods in all classes to the total number of available methods (locally defined plus inherited) for all classes in a design.

5 Attribute Inheritance Factor (AIF): The AIF metric is defined as the ratio of the number of inherited attributes in all classes to the total number of available attributes (locally defined plus inherited) for all classes in a design.

6 Polymorphism Factor (PF): The PF metric is defined as the ratio of the number of methods that redefine inherited methods to the maximum number of possible distinct polymorphic situations.

Source: Indicators of Structural Stability of Object-Oriented Designs: A Case Study
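
To make these ratios concrete, here is a rough sketch of how MIF might be computed for a toy design. The classes and method counts below are invented for illustration; this is not the formal MOOD tooling.

#!python
# Toy design: per class, locally defined methods and methods inherited
# without being overridden. (All numbers are invented.)
design = {
    "Shape":     {"local": 4, "inherited": 0},
    "Circle":    {"local": 2, "inherited": 3},
    "Rectangle": {"local": 3, "inherited": 2},
}

inherited = sum(c["inherited"] for c in design.values())
available = sum(c["local"] + c["inherited"] for c in design.values())

# MIF = inherited (and not overridden) methods / all available methods
mif = inherited / available
print(f"MIF = {inherited}/{available} = {mif:.2f}")   # 5/14 = 0.36

The other factors follow the same pattern: count the quantity in the numerator across all classes and divide by the corresponding maximum or total for the design.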

Lack of cohesion in methods

Example (Source: COMPSCI702 Software Measurement The "CK" Metrics)
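
As a rough stand-in for the lecture example, here is a sketch of the original Chidamber and Kemerer counting rule: P is the number of method pairs that share no instance variables, Q is the number of pairs that share at least one, and LCOM = P - Q (or 0 if that difference is negative). The class and its attribute usage below are invented.

#!python
from itertools import combinations

# Attributes used by each method of a hypothetical class (invented data).
uses = {
    "deposit":   {"balance"},
    "withdraw":  {"balance"},
    "set_owner": {"owner"},
}

p = q = 0   # p: pairs sharing no attribute, q: pairs sharing at least one
for m1, m2 in combinations(uses, 2):
    if uses[m1] & uses[m2]:
        q += 1
    else:
        p += 1

lcom = max(p - q, 0)
print(f"P={p}, Q={q}, LCOM={lcom}")   # P=2, Q=1, LCOM=1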


The CK Metrics Suite

These metrics were proposed by Chidamber and Kemerer.

Exercise of Coupling between Object Classes (CBO)

Reference: COMPSCI702 Software Measurement The "CK" Metrics

Response for a class

When a large number of methods of a class can be invoked in response to a single message, the testing and debugging of the class becomes complicated. (Source: RFC)
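
As a rough sketch of the counting involved (assuming the usual definition of RFC as the size of the response set: the class's own methods plus the methods they directly invoke), with invented method names:

#!python
# Hypothetical class: each of its methods maps to the external methods
# it calls directly. (All names are invented.)
calls = {
    "Order.add_item": {"Inventory.reserve", "Price.lookup"},
    "Order.checkout": {"Payment.charge", "Price.lookup"},
}

response_set = set(calls) | set().union(*calls.values())
print(f"RFC = {len(response_set)}")   # 2 own methods + 3 distinct callees = 5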

Example (Source: COMPSCI702 Software Measurement The "CK" Metrics)

Exercise of Coupling between object classes (CBO)

What is the CBO?

(A) 3
(B) 4
(C) 5

Source of the picture: Identify Collaborations

Exercise of Depth of the inheritance tree

Why do some software practitioners say that a DIT value greater than 4 compromises encapsulation and increases complexity?

(Source: Depth of the inheritance tree)

Exercise of cyclomatic complexity

What is the cyclomatic complexity below?
(A) 4
(B) 5
(C) 6


#!python
def func(x):
    if x == 0:
        return 3
    elif x == 1:
        return 4
    elif x == 2:
        return 5
    else:
        return 0

Weighted Methods per Class

Sum the complexity of each method in a class. The complexity of each method can be the cyclomatic complexity.

Cyclomatic Complexity is a procedural rather than an OO metric. However, it still has meaning for OO programs at the method level (source: McCabe's Cyclomatic Complexity )
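
A minimal sketch of the calculation (the per-method complexities below are invented; in practice they would come from a cyclomatic-complexity tool):

#!python
# Cyclomatic complexity of each method of a hypothetical class.
method_complexity = {"parse": 4, "validate": 3, "save": 1, "render": 2}

# WMC: sum of the complexities of all methods in the class.
wmc = sum(method_complexity.values())
print(f"WMC = {wmc}")   # 10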

Reference: CS 696: Advanced OO

Monday, July 30, 2007

In-process metrics

In-process metrics are measures related to the efficiency of software processes.

Example: to track actual testing progress against plan and therefore to be able to be proactive upon early indications that testing activity is falling behind (Source: In process metrics of software testing).

Code Coverage

  1. Statement Coverage - Has each line of the source code been executed and tested?
  2. Condition Coverage - Has each evaluation point (such as a true/false decision) been executed and tested?
  3. Path Coverage - Has every possible route through a given part of the code been executed and tested?
  4. Entry/Exit Coverage - Has every possible call and return of the function been executed and tested?
(Source: wiki)
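
A small illustration of why the criteria differ (the function and tests are invented): a single test can execute every statement yet still leave branch outcomes and paths untested.

#!python
def classify(x, y):
    label = "small"
    if x > 10:           # decision 1
        label = "big"
    if y > 10:           # decision 2
        label += "+wide"
    return label

# classify(20, 20) executes every statement (full statement coverage),
# but the False outcome of each decision is never exercised.
# Adding classify(0, 0) covers both outcomes of each decision,
# yet the mixed paths (True/False and False/True) remain untested;
# path coverage here needs all four combinations:
for x, y in [(20, 20), (0, 0), (20, 0), (0, 20)]:
    print((x, y), classify(x, y))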

Testing Effectiveness


“Trying to improve quality by increasing testing is like trying to lose weight by weighing yourself more often.” McConnell, S., Code Complete, Microsoft Press, 1993. (Source: Testing Effective Assessment )


Source: Measuring the Effectiveness of a Test

Defect-related metrics

Source: Bugs per line of code (Also known as defect density)
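
A quick worked example with invented numbers: if testing finds 30 defects in a 12,000-line module, the defect density is 30 / 12 KLOC = 2.5 defects per KLOC, which can then be compared across modules or releases.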




Reference:
(1) Change and Defect Models and Metrics
(2) Six Sigma Software Metrics


Sunday, July 29, 2007

Metrics for Source Code (Halstead's Theory)

Four scalar numbers are used to measure a program

n1 = the number of distinct operators
n2 = the number of distinct operands
N1 = the total number of operators
N2 = the total number of operands

Source: Halstead Complexity Measures
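
From these four counts Halstead derives the usual measures: vocabulary n = n1 + n2, length N = N1 + N2, volume V = N log2(n), difficulty D = (n1/2)(N2/n2), and effort E = D × V. A small sketch with invented counts:

#!python
import math

# Invented counts for some small program.
n1, n2 = 10, 7      # distinct operators, distinct operands
N1, N2 = 40, 25     # total operators, total operands

n = n1 + n2                  # vocabulary
N = N1 + N2                  # length
V = N * math.log2(n)         # volume
D = (n1 / 2) * (N2 / n2)     # difficulty
E = D * V                    # effort

print(f"vocabulary={n}, length={N}, volume={V:.1f}, difficulty={D:.1f}, effort={E:.1f}")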


User Interface Metrics

Some examples of UI metrics

(1) Information per screen (i.e. the number of fields between two Enter keys per screen). Note that this is more a measure than a metric.

(2) Differentiation: Cohesion is measured in terms of relationship between data on one screen.


(3) Structuring:
Every input screen is represented as a node in a graph. The number of paths between the various nodes should be as high as possible.


Source: http://portal.acm.org/

Lines Of Code

A line of code is any line that is not a comment or a blank line. As the example below shows, this metric is not very representative: the same statement can be written as one line or split across several. Nevertheless, LOC is very popular.

LOC=1
for (i = 1; i <= 10; i += 2) { a = a + i ; }

or

LOC=4
for (i = 1; i <= 10; i += 2)
{
    a = a + i ;
}
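
A naive counter in the spirit of this definition (just a sketch; real LOC tools also handle block comments, strings, and language-specific rules):

#!python
def count_loc(lines, comment_prefix="//"):
    """Count lines that are neither blank nor single-line comments."""
    loc = 0
    for line in lines:
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            loc += 1
    return loc

sample = ["for (i = 1; i <= 10; i += 2)", "{", "a = a + i ;", "}", "", "// loop"]
print(count_loc(sample))   # 4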

Reference
(1) Software metrics: good, bad and missing

Architectural Design Metrics

Fan-out indicates the number of functions a function calls. Modifying a function can result in changes to the functions that are called by the modified function. (Source)

Structural Complexity of a module i is
S(i) = fan-out(i) * fan-out(i)

More examples of fan-out
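
A sketch of computing fan-out and the structural complexity above from a call graph (the call graph itself is invented):

#!python
# Invented call graph: each function maps to the functions it calls.
call_graph = {
    "main":    ["read_input", "process", "report"],
    "process": ["validate", "compute"],
    "report":  [],
}

for func, callees in call_graph.items():
    fan_out = len(callees)
    s = fan_out ** 2          # S(i) = fan-out(i) * fan-out(i)
    print(f"{func}: fan-out={fan_out}, S={s}")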

Function Point (FP)

Product Metrics Landscape

Metrics for the analysis Model
Functionality Delivered (e.g. Function Points)
System Size (e.g. LOC)
Specification Quality

Metrics for the design model
Architectural Metrics
Component-level metrics
Interface design metrics
Specialized OO Design Metrics

Metrics for source code
Halstead Metrics
Complexity Metrics
Length Metrics

Metrics for testing
Statement and branch coverage metrics
Defect related metrics
Testing Effectiveness
In-process metrics

Reference:
(1) Measuring the Effectiveness of a Test

Friday, July 27, 2007

Fault-Based Testing

Software testing using test data designed to demonstrate the absence of a set of pre-specified faults; typically, frequently occurring faults. For example, to demonstrate that the software handles or avoids divide by zero correctly, the test data would include zero. (Source)

Black Box Testing

  1. Easy-to-compute data
  2. Typical data
  3. Boundary / extreme data
  4. Bogus data

Some References
(1) black box testing

Boundary Value Analysis

Boundary value analysis is a software testing design technique to determine test cases covering off-by-one errors. The boundaries of software component input ranges are areas of frequent problems (Wiki).

Example: checking if (month > 0 && month < 13)
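
A sketch of boundary-value test cases for that check (the function name is hypothetical): test just below, on, and just above each boundary.

#!python
def is_valid_month(month):
    return month > 0 and month < 13

# Boundaries are at 1 and 12, so 0, 1, 12 and 13 are the interesting inputs.
cases = {0: False, 1: True, 2: True, 11: True, 12: True, 13: False}
for month, expected in cases.items():
    assert is_valid_month(month) == expected, month
print("all boundary cases pass")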

Equivalence Partitioning

An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value from each class.
(Source)
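
Continuing the month example from above, a sketch of equivalence classes with one representative value per class (the representatives are chosen arbitrarily):

#!python
def is_valid_month(month):
    return 0 < month < 13

# One representative input per equivalence class.
partitions = {
    "valid (1..12)":      6,
    "invalid, too small": -3,
    "invalid, too large": 42,
}

for name, representative in partitions.items():
    print(f"{name}: month={representative} -> {is_valid_month(representative)}")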

Reference
(1) Equivalence Partitioning Example


Data Flow Testing

Data flow testing criteria are based on data flow information, i.e., variable definitions
and uses.

DEF(S)={X| statement S contains a definition of X}
Explanation: A variable v is defined by a statement if the execution of the statement updates
the value associated to v. For example, v=1 or v=sqrt(2)

USE(S) ={X| statement S contains a use of X}
A variable v is used by a statement S if the effect of statement S depends on the current
value of v, for example b=v and if (v==1)

Usage node, USE(v,n), is a node in the program graph where the specific variable, v, is used.

A Definition-Use path, du-path, for a specific variable, v, is a path where DEF(v,i) and USE(v,e) are the initial and the end nodes of that path.

A Definition-Clear path for a specific variable, v, is a Definition-Use path with DEF(v,x) and USE(v,y) such that there is no other node in the path that is a defining node of v.

More examples, see Table 14.1 in Data Flow Testing
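
A tiny invented example of DEF/USE sets and du-paths:

#!python
def clamp_next(read):
    x = read()        # node 1: DEF(x, 1)
    y = x + 1         # node 2: DEF(y, 2), USE(x, 2)
    if y > 10:        # node 3: USE(y, 3)
        y = 10        # node 4: DEF(y, 4)
    print(y)          # node 5: USE(y, 5)

# du-paths for y: (2,3), (2,5) and (4,5).
# The path 2-3-5 is definition-clear for y (no redefinition in between),
# while 2-3-4-5 is not, because node 4 redefines y.
# Tests that exercise both: read() returning 3, and read() returning 20.
clamp_next(lambda: 3)    # takes the False branch
clamp_next(lambda: 20)   # takes the True branch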

References :
(1) Teaching “Data Flow Testing” in a Software Engineering Course
(2) Couple of more testing methods

Common types of computer bugs with pseudocode example

  1. Divide by zero:   B=0 ; A=A/B;
  2. NULL pointer dereference:
  3. Infinite loops: For i=1 to 10 step 0 (the counter never advances)
  4. Arithmetic overflow or underflow:
  5. Exceeding array bounds: Define A[10] ....B=11; A[B]=1;
  6. Using an uninitialized variable :
  7. Accessing memory not owned (Access violation)
  8. Memory leak or Handle leak :
  9. Stack overflow or underflow :
  10. Buffer overflow :
  11. Deadlock :
  12. Off by one error :
  13. Race condition :
  14. Loss of precision in type conversion :
Source: Wiki

Stress Testing

Load Testing is subjecting a system to a statistically representative (usually) load. "Load testing" is merely testing at the highest transaction arrival rate in performance testing.

Performance Testing: See the textbook


Alpha and Beta Testing

Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the company. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Source: wiki

Object Oriented Testing

Reference: Object-oriented Testing

Thursday, July 26, 2007

Smoke Testing

In plumbing, a smoke test forces actual smoke through newly plumbed pipes to find leaks, before water is allowed to flow through the pipes (wiki).

In software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical (wiki).

Example of Smoke Test as Pre-release Testing:
Mozilla.org performs a daily build and runs a daily minimal “smoke test” on the build for several major platforms, in order to ensure the build is sufficiently stable to allow development work on it to proceed. If the build fails, “people get hassled until they fix the bits they broke.” If the smoke test identifies bugs, they are posted daily so that developers are aware of any serious problems in the build.
(Source: Two Case Studies of Open Source Software Development)

Criteria for Completion of Testing

When have we tested enough? The answer, I think, is simple.

Since no software will ever be failure-free and testing can be time-consuming, the straightforward answer is "good enough". Unfortunately, the term "good enough" is subjective.

Strategic Issues (Antibugging)

Antibugging is similar to the concept of Poka-yoke in Toyota Production System (also known as Just-in-time).

Verification and Validation

"Testing does provide the last bastion from which quality can be assessed and more pragmatically, errors can be uncovered. But testing should not be viewed as a safety net."

Why? (Hints: This is why we need software verification.)

Organizing for Software Testing

Why is there an inherent conflict of interest in a software team? Here is a hint!

There's a natural conflict between testers and programmers because of the difference in perspective each role has. In the simple view, programmers are centered on creation: they make things that didn't exist before. Like most creators, programmers have a natural optimism about making new things and solving problems. From How to End Wars Between Testers and Programmers

15 Product Metrics

15.2.6 Product Metrics Landscape

15.4.3 The CK Metrics Suite

14 Testing Tactics

14.5.2 Data flow testing

Equivalence-partitioning

Boundary-value-analysis

Black-box-testing

Fault-based-testing

Exercise of Cyclomatic Complexity

13 Testing Strategies

Organizing for Software Testing

Verification and Validation

Strategic Issues Antibugging

Criteria for Completion of Testing

Smoke Testing

Object Oriented Testing

Alpha and Beta Testing

Stress Testing

Miscellaneous Lists
Common Types of Computer Bugs

What is SAP Business One

SAP Business One is designed to meet the needs of small businesses. There are 14 core modules (wiki).

SAP hopes to break into the small-enterprise market with the acquisition of Israel's TopManage Financial Systems. As you may guess, the financial functions should be stronger than the manufacturing features.

Analysis of the Work Environment

This topic has been popular in software ergonomics.

Readers may refer to A Comparative Analysis of Style of User Interface Look and Feel in a Synchronous Computer Supported Cooperative Work Environment

SAP Business One

What is SAP Business One

Wednesday, July 25, 2007

User Interface Design Process

(1) Analysis: Understand user behavior and tasks
(2) Interface Design: Work out details
(3) Interface Construction: Develop prototypes or interface templates
(4) Validation: Test the work products


Reference: The User Interface Design Process Overview

12 Performing User Interface Design

12.2.2 The Process (User Interface Design Process)

12.3.4 Analysis of the Work Environment

Miscellaneous Lists

Program Design Language

Program Design Language (PDL) is a mix of pseudocode and natural language, so it can serve both for developing code and for writing software documents.

Confusing? The existence of PDL suggests that pseudocode alone is not good enough for documentation!

Still confusing? Have a look at a very good article, Using PDL for Code Design and Documentation.

Basic Design Principles

  1. The Open-Closed Principle
  2. The Liskov Substitution Principle
  3. Dependency Inversion Principle
  4. The Interface Segregation Principle
  5. The Release Reuse Equivalence Principle
  6. The Common Closure Principle
  7. The Common Reuse Principle

Coupling

Content Coupling
Common Coupling
Control Coupling
Stamp Coupling
Data Coupling
Routine Call Coupling
Type Use Coupling
Inclusion or Import Coupling
External Coupling

Collaboration Diagram

Elements of a Collaboration diagram

A collaboration diagram has three main elements: objects, relations/associations, and messages.

OCL

Easy-to-learn Reference:
Interactive OCL Tutorial

11 Modeling Component-Level Design

11.1.1 An Object-Oriented View (Component Diagram)

11.2.1 Basic Design Principles

11.2.4 Coupling

11.3 Conducting Component-Level Design (Collaboration Diagram)

11.4 Object Constraint Language

11.5.3 Program Design Language

Component Diagram

In UML 2.0, a component is drawn as a rectangle with optional compartments stacked vertically.

Agile Architecture Modeling

Agile architecture modeling uses the same techniques as architecture modeling before it, but puts the emphasis on just enough modeling. The purposes are:

Improved productivity: Some of the critical technical issues can potentially be avoided early, and therefore the team's productivity increases.
Reduced technical risk: What we model is not necessarily what we have to build; we should not overbuild our system. (This point is closely related to the one above.)
Improved communication: Our software team understands better what we think we are going to build and how we think we will build it.
Scaling agile software development: Software architecture provides the technical direction required by sub-teams to define and guide their efforts within the overall project.

For more information, please refer to Agile Best Practice: Initial High-Level Architectural Modeling

Exercise

Here is a short report, Structured Design Using Flowcharts, in which flowcharts are used instead of data flow diagrams. Which one would be more useful when you are writing a CRC application or a MasterMind game? Explain your answer.

Mapping Data Flow into a Software Architecture

Question: How can we map the following diagram A into B?

Figure A


Figure B

Both diagrams are from www.cs.njit.edu/~kirova/ppt/sec3e-e.ppt

Call and Return Architecture

Example: This is an extremely common structure for many types of systems (page 307, Pressman)



Source: https://calnet.berkeley.edu/developers/documentation/v2TransitionGuide/index.html

Layered Architecture

Example


Source: http://www.answers.com/topic/tcp-ip?cat=technology

Data-flow Architecture

Example

Data-centered Architecture

Example
Source: http://www.sqlsummit.com/Articles/LogicInTheDatabase.HTM

Architectural Complexity

For software design, complexity is often discussed in terms of coupling.

Architecture Trade-Off Analysis Method

According to the SEI, the main part of the ATAM consists of nine steps separated into four groups:

  1. Presentation, which involves exchanging ideas through presentations
  2. Investigation and analysis, which involves assessing key quality attribute requirements vis-a-vis architectural approaches
  3. Testing, which involves checking the results to date against the needs of all relevant stakeholders
  4. Reporting, which involves presenting the end results

http://www.sei.cmu.edu/publications/documents/00.reports/00tr004.html

Tuesday, July 24, 2007

Describing Instantiations of the System

Before interpreting a System-definition, a consistent subset is chosen. This subset is called an instance of the System-definition. A system instance is an instantiation of a system type defined by a System-definition. (From System)

What is the difference between sub-systems and instances? (Hint: sub-systems can be divided into further sub-systems.)

Refining the Architecture into Components

An archetype model is then refined into components.

(1) If a conventional approach is chosen, components can be derived from the data flow model. See DFD Example

(2) It is also possible to describe more detail in terms of functionality, as shown on page 302 of Pressman's book

Defining Archetypes

An archetype is a generic, idealized model of a person, object, or concept from which similar instances are derived, copied, patterned, or emulated (wiki).

For example, an archetype for a car: wheels, doors, seats, engine

In software engineering, an archetype can be a number of major components to describe what we want to build.

Representing the system in Context

A System Context Diagram (SCD) is the highest level view of a system, showing a target system and its input and output from/to external actors.


see DFD Example

10.2.2 Data Design at the Component Level

Low-level data design decisions should be deferred until late in the design process. Example:

(1) Less detail
Entity relationship diagrams
Business process diagrams

(2) More detail
User feedback documentation

(3) Most detail
Have the above ready before you move on to server model diagrams, which show the tables, columns, and relationships within a database.

10.2 Data Design

In the analysis model, we may have designed different data objects. Example:
customer_name
customer_id

In the data design, we complete them by defining their structures in detail.
Example:
customer_name char
customer_id int
customer_last_updated date (system used only)
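
A sketch of what "completing" these objects could look like in code (the types and the audit field shown are illustrative assumptions, not from the textbook):

#!python
from dataclasses import dataclass
from datetime import date

@dataclass
class Customer:
    customer_id: int
    customer_name: str
    customer_last_updated: date   # system-maintained audit field

print(Customer(1001, "Alice Example", date.today()))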

Exercise

When, in practice, do we write software without a design? What is the impact?

10 Creating an architectural design

10.2 Data Design
10.2.2 Data Design at Component Level

10.3.1 Data Centered Architecture
10.3.1 Data Flow Architecture
10.3.1 Layered Architecture
10.3.1 Call and Return Architecture

10.4.1 Representing System in Context
10.4.2 Archetype
10.4.3 Refining the Architecture into Components
10.4.4 Describing instantiations of the System

10.5.1 Architecture Trade-off-Analysis Method (ATAM)
10.5.2 Architectural Complexity

10.6.4 Transaction Mapping

Miscellaneous Lists
Agile Architecture Modeling
Exercise

Monday, July 23, 2007

Pattern-Based Software Design

Design Patterns are successful solutions to a group of similar problems.

Frameworks are semi-complete applications for development

Class libraries are self-contained modules

Reference: see http://www.cs.wustl.edu/~schmidt/PDF/patterns-intro4.pdf

Design Model

Data Design Elements
Architectural Design Elements
Interface Design Elements
Component-Level Design Elements
Deployment-Level Design Elements

Design Concepts

Abstraction

  1. Procedural Abstraction
  2. Data Abstraction

Architecture

  1. Structural Model
  2. Framework Model
  3. Dynamic Model
  4. Process Model
  5. Functional Model

Pattern

Modularity

Information Hiding

Functional Independence

  1. Cohesion
  2. Coupling

Refactoring

Design Classes

  1. User Interface Classes
  2. Business Domain Classes
  3. Process Classes
  4. Persistent Classes
  5. System Classes

Software Requirements Specification

The characteristics of a great SRS should be:
a) Correct
b) Unambiguous
c) Complete
d) Consistent
e) Ranked for importance and/or stability
f) Verifiable
g) Modifiable
h) Traceable

9 Design Engineering

9.3 Design Concepts
9.4 Design Model
9.5 Pattern-based Software Design

Miscellaneous Lists
Exercise

Sunday, July 22, 2007

Exercise of Analysis Model

How does the following diagram fit the description of the analysis model: "throughout analysis modeling, the software engineer's primary focus is on what, not how. What objects does the system manipulate, what functions must the system perform, what behaviors does the system exhibit, what interfaces are defined, and what constraints apply?"

State Diagram

Please refer to "State Diagrm in UML"

DFD Example

DFD is expressed using an intermediate language similar to the C language.

The images below are from the lecture notes passed out by Prof. Shubashish Dasgupta in September 2005. Please refer to http://www.marcoullis.com/KNOWLEDGE/SYSTEMS/marcoullisp_systems_process_modelling.html

The DFD diagrams are produced in the DeMarco and Yourdon notation, which differs from the Gane and Sarson notation.

DFD context level diagram


DFD level 0 diagram


DFD level 1 diagram

Flow-oriented modeling


Please note that there are two standards: "Gane and Sarson Standard (above)" and "Demarco and Yourdon Standard"

Writing use cases

Please refer to Basic Use Case Template

My short note is:

(1) Define actors
E.g. "Customer", "Shipping System"

(2) Write down names of scenarios (i.e. use cases) which may include a number of activities
E.g. "Create Account" , "Log In" , "Check Out"

(3) Group the use cases as system boundary which defines the scope of what a system will be.

(4) Use arrows showing which actors are involved in which use cases

Then you can finish a high-level use case diagram.

More Explanation: http://www.developer.com/design/article.php/2109801
Example: http://www.agilemodeling.com/images/models/useCaseDiagram.jpg

Tools to Draw Diagrams

(1) Dia

Data Objects

Is a data object the same thing as an object-oriented class? No.

Example
(1) Data Object implemented as a class
public class Car {
    private Engine engine;
    private boolean can_load = false;
}

(2) There can be references within a class object to operations that act on the data.
public class Car {
    private Engine engine;
    private boolean can_load = false;

    public Car() {
        engine = get_engine();
    }

    final public boolean can_load() {
        return can_load;
    }

    private Engine get_engine() {
        Engine my_engine;
        can_load = true;
        my_engine = Engine.getobject(this);
        can_load = false;
        return my_engine;
    }
}

Note: the source code is from http://www.ibm.com/developerworks/cn/java/l-single-call/index.html

Analysis Model Approaches

There are two main approaches

(1) Structured Analysis
See Data Flow Diagram on wiki

(2) Object-oriented Analysis
See object-oriented analysis on wiki

Domain Analysis

Have a look at domain knowledge

Analysis Rules of Thumb

It has been suggested that we should try to separate analysis from design.

However, since software design gives feedback to requirements analysis, some design invariably occurs as part of analysis.

Note that coding provides feedback to software design.

8 Building The Analysis Model

8.1 Analysis Model
8.1.2 Analysis Rules of Thumb
8.1.3 Domain Analysis
8.2 Analysis Model Approaches
8.3.1 Data Objects
8.5.1 Writing Use Cases
8.6 Flow-oriented Modeling
8.6.1 DFD Example
8.6.3 State Diagram

Miscellaneous Lists
Tools
Exercise

7 Requirements Engineering

7.2 Requirements Engineering Tasks
7.2.7 Requirements Management
7.6 Technical Representation
7.8 Validating Requirements

Miscellaneous Lists
Software Requirements Specification
Requirements Tools

Technical representation

Examples

(1) Use Case
(2) Class Diagram
(3) UML State Diagram
(4) Flow Modeling
(5) Analysis Patterns

Analysis Model

The analysis model is the first technical representation of a system to be built.

The model should cover:
(1) what customers need
(2) what software design can be
(3) what can be validated once the software is built

Requirements Management Tool

Here is an open source requirements tool.

http://sourceforge.net/projects/osrmt

Validating Requirements

Some requirements are unnecessary and some are ambiguous.

Therefore, having an independent team, such as Internal Audit, help review the requirements ensures that no resources are spent developing something unnecessary.

A review meeting can be held to answer:

(1) Is each requirement consistent with the overall objectives for the system/product?
(2) ...
(3) ...

Saturday, July 21, 2007

Requirements Management

Features traceability table (see page 180 of the textbook)

You get the requirements from people A, B, and C:

A: R01, R02 (email R01 and R02 back to A for confirmation)
B: R03, R04 (email R03 and R04 back to B for confirmation)
C: R05 (email R05 back to C for confirmation)

Then you write down your specifications for modules X and Y:

X: A01, A02, A03
Y: A04, A05

Distribute the specifications to the project team.

Having a traceability table helps manage the requirements and specifications.

Source traceability table

Dependency traceability table

Subsystem traceability table

Interface traceability table

Requirements Engineering Tasks

Inception: Have an idea what software is needed. For example, a supermarket needs a POS system

Elicitation: Define the objectives of the system, for example (i) to speed up the checkout process and (ii) to facilitate price changes

Elaboration: Understand how the system works, for example how the cashiers interact with the system. The work product of elaboration is an analysis model.

Negotiation: Talk to customers about what functions cannot be done.

Specification: build a set of written documents that clearly address what should be developed.

Validation: Examine the specification. This should ideally be done by an independent group of people who do not write up the specification.