Tyagi et al. surveyed the various reliability testing methodologies being used to analyze component-based software development. CBSD is cost effective, but it is difficult to test. Soft computing methods are better than other mathematical models, but they are not well enough defined to meet reliability testing requirements; this methodology requires further study and empirical support.

Moonen et al. devised a reliability model to test component-based systems, which estimates the reliability of every individual component and then calculates a relative reliability statistic. Another research effort used the BIP framework: it dynamically builds a minimal abstraction of the current system, and experimental evaluations on a robotic application validate the claims and the feasibility of the approach.

In component-based software development, different components may be implemented in more than one programming language and are connected to each other, which makes analysis across components challenging. One paper provides a method for analysis across the components in a component-based software system (CBSS); the method is a combination of program analysis techniques and model-driven engineering, and it includes the construction of a system-wide dependence graph made up from the different pre-developed components. Another paper builds a model to reverse engineer a homogeneous model consisting of heterogeneous components; the method described there can be implemented in a prototype tool that keeps track of data flow across components in CBSD.

Choosing suitable components and composing them in a well-defined software architecture is very important in CBSD. Kaur et al. focused on the relationship among components and their dependency. Each component has its own syntax rules, which are followed in the code, and its own semantics: name, interface and body.

CBSD embodies reusability, software quality and maintainability. To develop a CBSS we need some off-the-shelf components, which we then integrate into a well-defined software architecture. The benefits of CBSD are reductions in cost, development time and effort, and an increase in reliability, since the components have been tested previously. Efficiency and flexibility also increase, because different components can be added or removed easily. For CBSD we need components with neutral interfaces, so that we can easily integrate them with different components and identify the cross-communication among them.

Table I (excerpt): quality parameters, each marked Yes/No according to whether a surveyed study considers it.
- Structural-based complexity.
10. Performance: latency and response time measurement, based on user load conditions.
12. Usability: a quality attribute that assesses how easy user interfaces are to use.
13. Scalability: the ability to raise the capacity, mainly the number of users, of the deployed system over a period.
18. Reusability: the ability of software components to be reused in other applications.
19. Testability: the degree to which a software system or component facilitates testing.
20. Flexibility: a property indicating whether the software is easy to change.
21. Dependability: a measure of a system's availability, reliability and maintainability.

For quality assurance, different standards are followed to assure quality in component-based software development; the quality parameters are shown in Table I. To assure quality in CBSD, quality components, an efficient framework and a simple integration process are needed. This paper presents the different methods and techniques used to assess, assure and estimate different quality aspects of a CBSS, including stability. Chopra et al. were concerned with reliability, interface and structural complexity, usability, scalability, performance and consistency. Soni and Sharma considered performance, understandability of software documents, interface and structural complexity, and dependability for quality assurance in CBSD. Sharma, Semwal and Sharma addressed verification, validation and reusability. In future, we need to explore methodologies for integrating quality components without affecting their quality and design.
Component-Based Software Quality
This mechanism makes the capture of transition dependencies mathematically tractable, even in the case of complex components.
However, this approach does not specify how the input values can be obtained [13, 25]. The profile is built based on domain-expert input and operational data obtained from similar functional components. In the same way, the work by [7] addresses the problems of individual component reliability prediction at the early design stage and of operational data unavailability. Furthermore, the author modifies the operational profile developed in [11] by using multiple information sources that can be available at the early design stage, such as the requirements specification document and a simulation technique, in order to achieve more accuracy.
The adoption of the work in [7] as part of the system-level reliability approaches [12, 25] demonstrates the need for the prediction of individual component reliability in order to predict the reliability of the whole system. However, the first-order DTMC does not explicitly reflect the effects of architectural features such as loops and conditional branching in the component reliability prediction [12]. Moreover, none of these techniques uses a fine-grained method that utilizes the explicit requirements specification as the main source at the early design stage to synthesize the behaviour models.
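To make the first-order DTMC computation discussed here concrete, the following is a minimal sketch, with hypothetical state reliabilities and transition probabilities rather than values from any cited work, of how per-state reliabilities and transition probabilities combine into a single reliability figure:

```python
# Per-state reliabilities R[i] and transition matrix P (hypothetical values);
# column 3 is the absorbing "correct exit" state.
R = [0.99, 0.98, 0.995]
P = [
    [0.0, 0.7, 0.3, 0.0],   # state 0 -> states 1 and 2
    [0.0, 0.0, 0.6, 0.4],   # state 1 -> state 2 or exit
    [0.0, 0.0, 0.0, 1.0],   # state 2 -> exit
]

# x[i] = probability of reaching the exit failure-free from state i:
#   x[i] = R[i] * (P[i][exit] + sum_j P[i][j] * x[j])
# solved by fixed-point iteration (the failure mass 1 - R[i] guarantees
# convergence; this acyclic example is exact after three iterations).
x = [0.0, 0.0, 0.0]
for _ in range(100):
    x = [R[i] * (P[i][3] + sum(P[i][j] * x[j] for j in range(3)))
         for i in range(3)]

print(round(x[0], 4))   # reliability from the start state
```

Note that, as the surrounding text points out, a first-order model of this kind does not distinguish loop entry and exit points; it only sees aggregate transition probabilities.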
The scenario-based method of Rodrigues et al. is also relevant. In that work, the behaviour model of the component is synthesized from the requirements specification. The requirements are provided as scenarios using message sequence charts (MSCs). Then, the states of the behaviour model are mapped to the DTMC to compute the reliability.
However, that work did not consider the influence of loop entry and exit points in the computation, due to the use of the DTMC. The component architectural design is modeled or constructed in the form of a state machine. This state machine can be derived from the code using induction algorithms or from the requirements specification using behaviour synthesis algorithms. Synthesizing a behaviour model or deriving a state machine from requirements specification is the starting point for the proposed technique. For ease of exposition, the proposed technique is depicted as a three-phase process as shown in Fig 1.
Broadly, the requirements specification is the main source utilized by the technique to synthesize the component behaviour model. Finite state machines (FSMs) are the basic elements used in the behaviour synthesis. The behaviour model serves two purposes: as a simulation of the component behaviour and as a source for identifying the elements of a probabilistic dependency graph. The simulation provides an execution log for the component, and the log serves as the runtime observation data required as input to generate operational data for the component.
Finally, the constructed graph, which is a component probabilistic dependency graph (CPDG), is used as input to a tree traversal algorithm that computes the component reliability. A dependency graph is selected to represent the component structure and behaviour for two reasons. First, it captures the component structure and behaviour together, an aspect that is overlooked in most current component reliability techniques. Second, it makes it possible to use a specific computation algorithm, namely the tree traversal algorithm, to allow for a tractable solution. The process of synthesizing the component behaviour from scenario specifications, a popular requirements elicitation tool, involves three activities: preparing the scenarios, translating the component instances in each scenario to FSMs, and merging the FSMs of each component into one state machine model such as a labelled transition system (LTS).
In order to define how the behaviour models can be synthesized, this section briefly reviews our previous research work [26], which is relevant to the synthesis of behaviour models from requirements specification; note that the technique proposed in this paper does not depend solely on our previous work. Triggered scenario languages provide syntactic constructs for describing the conditional or causal relations between sequences of actions.
Scenarios in a language like live sequence charts are described in one of two forms, universal or existential. In triggered languages, scenarios are described in universal form with existential semantics. This type of modeling provides a good fit with use cases, which are the primary form of requirements elicitation. This statement is also conditional in the sense that requesting and obtaining cash is expected to be possible only if the user has inserted a valid card and input the correct password.
The last statement is in universal form, but it is a more concise and compact combination of two universal statements. Fig 2 shows the specifications of the ATM system, with Fig 2a depicting the system constraints, which are elicited as domain knowledge, and Fig 2b illustrating the ATM scenarios using s-TSs. The s-TSs enhance the current triggered scenario languages [27, 28] by adding constructs that enable scenarios to be written in a compact and concise manner, in order to improve the scalability of scenario modeling.
At the early design stage, when complete information about the behaviour of a system is not available, there is no option other than to leverage the system constraints and their state variables as basic information sources to enrich the system scenarios, which are already documented using s-TSs as mentioned previously. Each component instance is annotated independently, depending on its own list of state variables. The reason for this independence is that the goal is only to construct the behaviour model of the component, not the behaviour of the system that is represented through the scenario.
The values of some state variables may be marked as missing because they are not specified. These missing values in the annotated scenarios are propagated in Step 3 of this phase using a propagation technique similar to the work by [29] and [30]. Fig 3b shows one of the scenarios in Fig 2 after the scenario preparation steps have been applied.
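The propagation step can be illustrated with a small sketch. The rule below is an assumption for illustration only (the cited techniques [29, 30] are more elaborate): a missing state-variable value takes the most recent known value of that variable earlier in the scenario:

```python
def propagate(annotations):
    """Fill missing (None) state-variable values with the most recent
    known value of the same variable from earlier scenario steps."""
    last = {}
    out = []
    for vec in annotations:
        filled = {var: (val if val is not None else last.get(var))
                  for var, val in vec.items()}
        # remember every value that is now known
        last.update({v: x for v, x in filled.items() if x is not None})
        out.append(filled)
    return out

# Hypothetical annotated scenario for an avoid-component-like instance.
steps = [{"avoidActive": False, "obstacleLeft": False},
         {"avoidActive": None, "obstacleLeft": True},
         {"avoidActive": True, "obstacleLeft": None}]
out = propagate(steps)
print(out[1]["avoidActive"], out[2]["obstacleLeft"])   # False True
```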
Once the scenarios are prepared (annotated and propagated), we are ready to synthesize a behaviour model for each component in the system. Each FSM represents the behaviour of the component corresponding to a specific scenario from the set of system scenarios. These FSMs will later be merged in Phase 3 to produce a complete behaviour model of the component. To convert each component instance within a scenario to an FSM, the pre/post-condition values and the operations (incoming and outgoing messages) of the component instance are translated to states and transitions, respectively.
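As a minimal illustration of this translation (the step format and message names are hypothetical), each scenario step contributes its pre- and post-condition state vectors as states, and its message as a labelled transition between them:

```python
# Hypothetical scenario step format: (pre_state_vector, message, post_state_vector).
# State vectors are tuples of state-variable values; messages label transitions.
scenario = [
    (("idle",), "detectObstacle", ("detected",)),
    (("detected",), "activateLeft", ("avoiding",)),
    (("avoiding",), "directionChanged", ("idle",)),
]

def scenario_to_fsm(steps):
    """Map pre/post condition vectors to states and messages to transitions."""
    states = set()
    transitions = set()
    for pre, msg, post in steps:
        states.update([pre, post])
        transitions.add((pre, msg, post))
    return states, transitions

states, transitions = scenario_to_fsm(scenario)
print(len(states), len(transitions))   # 3 states, 3 transitions
```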
In the final activity of behaviour model construction, we merge the different FSMs of the component by identifying identical terminal and starting states. Two FSMs are merged if and only if the terminal state of one is similar to the starting state of the other. The merging transition is created from a terminal state to a start state; a transition from a start state to a terminal state is not allowed. The similarity between states is determined based on their state vector values.
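A sketch of this merging rule (the FSM representation and state names are hypothetical): transitions are unioned, and a merge link is recorded only when a terminal state vector equals another FSM's start state vector, never the other way round:

```python
def merge_fsms(fsms):
    """Merge per-scenario FSMs into one model.

    Each FSM is a dict: {'start': vec, 'terminal': vec, 'transitions': {(src, lbl, dst)}}.
    States are identified by their state-vector values, so two states are
    "similar" when their vectors are equal. A merging link goes only from a
    terminal state to a similar start state.
    """
    transitions = set()
    for fsm in fsms:
        transitions |= fsm["transitions"]
    merges = set()
    for a in fsms:
        for b in fsms:
            if a is not b and a["terminal"] == b["start"]:
                merges.add((a["terminal"], b["start"]))
    return transitions, merges

fsm1 = {"start": ("s0",), "terminal": ("s1",),
        "transitions": {(("s0",), "m1", ("s1",))}}
fsm2 = {"start": ("s1",), "terminal": ("s2",),
        "transitions": {(("s1",), "m2", ("s2",))}}
transitions, merges = merge_fsms([fsm1, fsm2])
print(len(transitions), len(merges))   # 2 transitions, 1 merge point
```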
The final output of this phase is the LTS, which represents the behaviour of the component. The use of a probabilistic graph is a classical method in software engineering applications, as in the work of Baah et al. In early reliability prediction there are a number of approaches that use a probabilistic graph, such as that of Yacoub et al. However, the nodes in Yacoub et al. represent components rather than the internal states of a single component. In the construction activity, all the elements of the CPDG are defined based on the basic notation and definitions of the CPDG and the synthesized behaviour model of the targeted component.
The synthesis of the behaviour model was already described in relation to the previous phase. The next subsection defines the notations and parameters of the CPDG. Then, the operational data generation activity which provides the data used to assign values to all the CPDG parameters is described. Briefly, the CPDG construction requires the identification of its basic notation and definitions.
RS_i is the reliability of state i: the probability that the component will pass through the current state correctly (fault free). PT_ij is the probability of transition from state i to state j, which is the probability that state j will be executed after the current state; the sum of the outgoing transition probabilities from each state to all the other states, implicitly including the failure transition, should be 1.
PT_iExit is the probability of transition from state i to the exit state. Fig 6 shows an example of how a CPDG can be constructed based on the states and transitions of the behaviour model of a component. The nodes in the CPDG are directly inherited from the states of the behaviour model, whereby all the states in the behaviour model become nodes in the CPDG.
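A CPDG can be sketched as a small data structure (all names and values here are hypothetical) in which each node carries its state reliability RS_i and each edge its transition probability PT_ij; a basic sanity check is that the outgoing probabilities from each non-exit state sum to 1:

```python
# A CPDG sketch: node -> RS (state reliability); edge (src, dst) -> PT.
cpdg = {
    "nodes": {"Entry": 1.0, "s1": 0.99, "s2": 0.98, "Exit": 1.0},
    "edges": {
        ("Entry", "s1"): 1.0,
        ("s1", "s2"): 0.75,
        ("s1", "Exit"): 0.25,
        ("s2", "Exit"): 1.0,
    },
}

def outgoing_sum(cpdg, state):
    """Sum of PTs leaving `state`; should be 1 for every non-exit state,
    since the failure transition is implicit in RS rather than a real edge."""
    return sum(p for (src, _), p in cpdg["edges"].items() if src == state)

print(outgoing_sum(cpdg, "s1"))   # 1.0
```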
The operational data describe the behaviour of the component quantitatively. The data identify an ordered set of operations that the software component performs along with their associated probabilities. At the early stages of software development, the operational data on a given component may not be available, particularly in the case of newly designed components, and a design time reliability prediction technique must take this uncertainty into consideration.
Baum-Welch is an iterative optimization technique used with an HMM to approximate the best transition and observation probabilities. Similarly, the operational data give the frequencies of the transitions among the states, which translate into the transition probabilities PT_ij of the CPDG. The algorithm estimates the component reliability based on the CPDG branches and their relevant parameters. In the CPDG, each path represents consecutive states and transitions.
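The frequency-to-probability translation can be sketched directly (the execution log below is hypothetical): count each observed state-to-state transition in the log, then normalise per source state to obtain PT_ij:

```python
from collections import Counter

# Hypothetical execution log: the sequence of states visited by the component.
log = ["s1", "s2", "s1", "s2", "s3", "s1", "s2", "s3"]

# Count observed transitions, then normalise per source state.
pairs = Counter(zip(log, log[1:]))
totals = Counter()
for (src, _), n in pairs.items():
    totals[src] += n
pt = {(src, dst): n / totals[src] for (src, dst), n in pairs.items()}

print(round(pt[("s2", "s3")], 3))   # 2 of 3 exits from s2 go to s3
```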
The computation is based on Eq 3, which is derived from Eq 1. Eq 1 has been widely used by path-based reliability approaches [20, 34, 35, 36] at the system level, while in this technique it is adopted at the component level. This adoption is similar to most of the state-based component reliability prediction techniques [7, 11, 12], which reuse a system-level formula at the component level. Each path is iterated until the number of iterations equals the maximum number of expected iterations.
By adopting the formula in [35], the path reliability can be defined as follows. Pr(v_i) is the probability of visiting each state i belonging to the path from the initial state. From the CPDG definitions, the probability of transition to the first state is 1; Pr(v_i) can then be rewritten accordingly. Algorithm 1, beginning from the start node, computes the reliability of all the CPDG branches. As shown in Lines 13 and 14, the branch reliability is computed based on Eq 3. At the end of each branch, the reliability value of that branch is stored in an Rtemp variable (Line 8).
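The traversal can be sketched as a depth-first walk over a small hypothetical CPDG: along each branch the state reliabilities RS_i and transition probabilities PT_ij are multiplied (in the spirit of Eq 3), loops are bounded by a maximum expected iteration count, and the per-branch values are then aggregated; this is an assumed reading of Algorithm 1, not its published pseudocode:

```python
def branch_reliabilities(cpdg, node="Entry", acc=1.0, visits=None, max_iter=2):
    """DFS over the CPDG: multiply RS_i * PT_ij along each branch and
    return one accumulated value per branch; revisits to a state are
    bounded by `max_iter` so that loops terminate."""
    visits = dict(visits or {})            # per-branch visit counts
    visits[node] = visits.get(node, 0) + 1
    acc *= cpdg["nodes"][node]             # fold in RS of this state
    if node == "Exit":
        return [acc]                       # this branch's Rtemp value
    out = []
    for (src, dst), pt in cpdg["edges"].items():
        if src == node and visits.get(dst, 0) < max_iter:
            out += branch_reliabilities(cpdg, dst, acc * pt, visits, max_iter)
    return out

# Hypothetical CPDG: node -> RS; edge (src, dst) -> PT.
cpdg = {
    "nodes": {"Entry": 1.0, "s1": 0.99, "s2": 0.98, "Exit": 1.0},
    "edges": {("Entry", "s1"): 1.0, ("s1", "s2"): 0.9,
              ("s1", "Exit"): 0.1, ("s2", "Exit"): 1.0},
}
branches = branch_reliabilities(cpdg)
print(round(sum(branches), 4))   # branches aggregated into one figure
```

The aggregation by summation is valid here because each branch product already carries its transition probabilities, so the branches partition the failure-free probability mass.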
Using the value of Rtemp, the component reliability is then computed using Eq 4. Eq 4 is a common way to compute path reliability in most path-based reliability approaches [20, 36, 37].

This section presents the evaluation of the proposed technique in terms of applicability checking, sensitivity analysis and comparative evaluation. The applicability checking is directed at revealing whether the prediction of component reliability obtained from behaviour models synthesized from requirements specification is both possible and meaningful, and furthermore at generalizing the proposed technique.
The results are demonstrated in the context of a real-world case study. The sensitivity analysis is designed to show that the proposed technique responds meaningfully to changes in its parameters, which in turn indicates the correctness of the technique. The sensitivity analysis is also used to identify the critical states in a component, whose modification has the greatest impact on improving the component reliability.
From this perspective, the sensitivity analysis can serve as a decision support tool for evaluating various design alternatives. Finally, the comparative analysis was conducted to investigate the improvement yielded by the proposed technique with respect to the problems of existing techniques discussed in the introduction of this paper.
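The critical-state identification can be sketched as follows (CPDG values are hypothetical, and the reliability function simply multiplies RS_i and PT_ij along acyclic branches): each state reliability is raised by 1% in turn, and the state whose change most improves the predicted reliability is flagged as critical:

```python
import copy

# Hypothetical CPDG: node -> RS (state reliability); edge (src, dst) -> PT.
cpdg = {
    "nodes": {"Entry": 1.0, "s1": 0.99, "s2": 0.98, "Exit": 1.0},
    "edges": {("Entry", "s1"): 1.0, ("s1", "s2"): 0.9,
              ("s1", "Exit"): 0.1, ("s2", "Exit"): 1.0},
}

def reliability(g, node="Entry", acc=1.0):
    """Multiply RS and PT along every acyclic branch and sum the branches."""
    acc *= g["nodes"][node]
    if node == "Exit":
        return acc
    return sum(reliability(g, dst, acc * pt)
               for (src, dst), pt in g["edges"].items() if src == node)

# Raise each state's RS by 1% and record the resulting change in the
# predicted component reliability; the largest change marks the critical state.
base = reliability(cpdg)
impact = {}
for s in ("s1", "s2"):
    g = copy.deepcopy(cpdg)
    g["nodes"][s] = min(1.0, g["nodes"][s] + 0.01)
    impact[s] = reliability(g) - base

print(max(impact, key=impact.get))   # the critical state
```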
In the evaluation we used a software component named the avoid-component, which is part of the controlling system of a robotic wheelchair [38]. The wheelchair software is a component-based system that has been developed by our research group to support research in embedded real-time (ERT) software engineering and rehabilitation robotics. The robotic wheelchair provides mobility for people with a disability and elderly people who are unable to operate a classical wheelchair. The behaviour of the robot while in motion is highly constrained by the characteristics of reliability attributes and safety criteria.
The robotic wheelchair consists of a motor power platform equipped with detectors and movers, which are chosen for their suitability and usability in achieving the wheelchair functionality. The most common detectors and movers, such as the infrared detector, sonar, laser, fibre optics and others, are used in the robotic wheelchair system to detect an obstacle and determine its distance. The power driving system is one of the important factors in a robotic wheelchair, because its main purpose is to facilitate the user's movement, along with other capabilities such as driving automatically and avoiding obstacles.
In order to avoid unnecessary complexity, this research focuses mainly on activities related to the scenario of obstacle avoidance from the point of view of the avoid-component of the robotic wheelchair. In this scenario, the avoid-component receives a detectObstacle signal; the obstacle may be on the left side or the right side.
Depending on the position of the obstacle, the system has to activate an obstacleLeft or obstacleRight variable, which is located in a component called Subsumption. As soon as the variable is activated, it has to set a global variable named avoidActive, and then wait 2 mc for a direction change before returning to the detectObstacle state to repeat these activities. Fig 7a shows the wheelchair system constraints as part of the requirements specification. Based on these constraints and the scenarios of the system, the state variables of the avoid-component shown in Fig 7b were elicited.
Using the system constraints and the state variables of the avoid-component, the scenario of obstacle avoidance shown in Fig 7c is prepared (annotated and propagated). By applying the steps defined previously to the prepared scenario of obstacle avoidance, the behaviour model of the avoid-component is constructed; this behaviour model is shown in Fig 8. As an illustration, assume that the failure rate values in the behaviour model are related to the operations that appear in the scenario of obstacle avoidance (Fig 7c), for which the values are shown in Table 1.
Similar to the work by [7, 11] described previously, these values were inferred from analogous components with similar operations and from input obtained from a domain expert (the wheelchair developer). The failure rates and the behaviour model are required in the next phase to prepare the CPDG. To prepare the CPDG of the avoid-component, two steps are needed: constructing the CPDG and generating the operational data relevant to the component.
The data are used to assign the values of the transition probabilities of the CPDG. The super nodes Entry and Exit are added to represent the instantiation and termination of execution.
Fig 9a shows the constructed CPDG of the avoid-component. Based on the states of the behaviour model and the failure rates shown above in Table 1, the operational data relevant to the avoid-component were obtained. Each state in the behaviour model is mapped to a state in S, and each transition is mapped to an observation in O. Similar to the work by [7, 11, 12], domain knowledge and similar-function components are used; this information serves as a basis to initialize the values of the HMM.
For example, to determine the probability of receiving a signal from the IR sensor, the probability of failure relevant to this operation, obtained from similar components, is used. However, to determine whether the signal is received from the left or the right IR sensor, domain knowledge is used. For instance, assume a domain expert states that the signal comes from the left sensor most of the time.
Receiving the signal from the left IR sensor is therefore assigned a higher probability than receiving it from the right. After initializing the values of the HMM using this type of information and executing the Baum-Welch algorithm, the final transition probabilities required in the CPDG are obtained. Fig 9b shows the constructed CPDG of the avoid-component after the transition probabilities have been assigned.