
Monday, 9 April 2018

Requirements Traceability Matrix

A traceability matrix is a document that correlates and traces business, application, security, or any other requirements to their implementation, testing, or completion, relating the different system components so that their status stays up to date as the system nears completion. It captures all requirements proposed by the client and their traceability in a single document delivered at the conclusion of the life cycle, mapping each user requirement to its test cases. The main purpose of a Requirement Traceability Matrix is to make sure that all test cases are covered, so that no feature is left out during testing.

The parameters of a Requirement Traceability Matrix include:

Requirement ID
Requirement Type and Description
Trace to design specification
Unit test cases
Integration test cases
System test cases
User acceptance test cases
Trace to test script
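The parameters above can be sketched as a simple data structure. This is a minimal illustration, not a real tool; the requirement IDs, specification names, and test-case names are all hypothetical.

```python
# A requirement traceability matrix as a plain Python dict.
# All IDs below are made up for illustration.
rtm = {
    "REQ-001": {
        "description": "User can log in with email and password",
        "design_spec": "DS-4.2",
        "unit_tests": ["UT-101"],
        "integration_tests": ["IT-55"],
        "system_tests": ["ST-12"],
        "uat_tests": ["UAT-3"],
    },
    "REQ-002": {
        "description": "Password reset via email link",
        "design_spec": "DS-4.3",
        "unit_tests": [],
        "integration_tests": [],
        "system_tests": [],
        "uat_tests": [],
    },
}

def uncovered(matrix):
    """Return requirement IDs with no test cases at any level."""
    levels = ("unit_tests", "integration_tests", "system_tests", "uat_tests")
    return [rid for rid, row in matrix.items()
            if not any(row[level] for level in levels)]

print(uncovered(rtm))  # REQ-002 has no tests traced to it yet
```

The `uncovered` check is exactly the matrix's main purpose in miniature: it flags requirements that would otherwise be left out during testing.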

James Martin's Rapid Application Development

James Martin's rapid-application development (RAD) is an iterative, adaptive approach to rapid development. RAD favours an adaptive development process over up-front planning. Prototypes are often used in addition to, or sometimes even in place of, design specifications.

RAD is especially well suited to companies that are good at Agile and that favour components or pre-existing classes (APIs).

Phases -

Requirements planning phase – Users, managers, and IT staff members discuss and agree on business needs, project scope, constraints, and system requirements.

User design phase – users interact with systems analysts and develop models and prototypes that represent all system processes, inputs, and outputs, typically with a combination of Joint Application Development (JAD) techniques and CASE tools to translate user needs into working models.

Construction phase – program and application development; users can still suggest changes. Unit, integration, and system testing take place here.

Cutover phase – data conversion, testing, changeover to the new system, and user training are the staples of the last stage of James Martin's RAD.

Modelling and construction may occur in parallel. The former typically lasts from 60 to 90 days, while the latter may make use of component reuse, automatic code generation, and testing.

Joint Application Design

Joint application design (JAD) is an inherent process of the dynamic systems development method (DSDM) for gathering requirements. It is basically a workshop where users and IT professionals meet to define the business requirements for the proposed system. Through JAD workshops the knowledge workers and IT specialists are able to resolve any differences between themselves regarding the new system. The premise is that miscommunications carry far more serious repercussions if not addressed until later in the process. In the end, this process results in a new information system that is feasible and appealing to both the designers and the end users.

True to its Agile nature, JAD is most effective in small, clearly focused projects and less effective in large complex projects.

Prototyping - evolutionary process

A software process that has been gaining prominence since the late 1980s, software prototyping develops software by iteratively improving incomplete versions of the target program.

Prototyping enables steady feedback from users early in the process and is a reliable source of accuracy during the first development stages for determining the viability of deadlines and milestones. It is most useful when requirements are undefined, the project needs to be executed in a hurry, and the application domain is not well known at the specification stage.

A prototype allows users to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than relying on requirements-based descriptions. Interaction design in particular makes heavy use of prototyping with this goal.

The process of prototyping involves the following stages:

1- Identify basic requirements, including input and output information. Non-functional requirements can be set aside for now.

2- Develop the initial prototype, with emphasis on user interfaces.

3- Review. The user goes over the prototype and gives feedback.

4- Revise and enhance the prototype, improving it through the feedback. Negotiation about what is within the scope of the contract/product may be necessary. The last two steps are repeated for approved changes.
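The review/revise loop in steps 3 and 4 can be sketched as code. Everything here is illustrative: the feedback values, the approval signal, and the toy revise function are hypothetical, not part of any real prototyping tool.

```python
# A sketch of the prototyping cycle: review (step 3) and revise (step 4)
# repeat until the user approves or a round limit is hit.
def prototype_cycle(initial_prototype, get_feedback, revise, max_rounds=10):
    prototype = initial_prototype
    for _ in range(max_rounds):
        feedback = get_feedback(prototype)       # step 3: user reviews
        if feedback == "approved":               # hypothetical approval signal
            return prototype
        prototype = revise(prototype, feedback)  # step 4: revise and enhance
    return prototype

# Toy usage: each round the "user" requests one more feature.
requests = iter(["add search", "add export", "approved"])
result = prototype_cycle(
    ["login screen"],
    get_feedback=lambda p: next(requests),
    revise=lambda p, fb: p + [fb],
)
print(result)  # ['login screen', 'add search', 'add export']
```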

Types of prototyping

Throwaway prototyping, or close-ended prototyping, produces a model that will eventually be discarded rather than worked on to become the final product. After basic requirements gathering, a simple working model is made to showcase the user's requirements, so that the user can form an idea of what the working software will look like. It is also called rapid prototyping. It may include storyboards, animatics, or drawings: non-functional designs that show how the system will look. As such, a throwaway prototype is mostly used to validate requirements and obtain new ones.

Evolutionary or breadboard prototyping consists of constantly refining a prototype until it becomes the final version. The evolutionary prototype forms the core of the target system, with improvements and further requirements being built on it.

Sunday, 11 March 2018

Distributed Objects

In distributed systems, components on different platforms can talk to each other over a network. The best-known type of distributed system is the client-server model, which forms the base for multi-tier architectures. Alternatives are broker architectures such as CORBA and Isis' group communication system, which also happen to be examples of middleware.

Several technology frameworks support distributed architectures, including .NET, J2EE and CORBA. Middleware is a system layer that supports and simplifies the development and execution of distributed applications, acting as a buffer between the applications and the network and managing the different components of the distributed system.

Middleware acts as an intermediary for the distributed system.

The basis of a distributed architecture is its transparency, reliability, and availability.


Advantages:

Resource sharing − hardware and software.

Openness − Flexibility for hardware and software from different vendors.

Concurrency − Concurrent processing to enhance performance.

Scalability − Increased throughput by adding new resources.

Fault tolerance − continuous operation after a fault has occurred.


Disadvantages:

Complexity − greater than in centralised systems.

Security − more susceptible to external attack.

Manageability − more effort required for system management; related to the complexity above.

Unpredictability − Unpredictable responses depending on the system organisation and network load.

1- Client-Server

The client-server architecture is the most common distributed system architecture. Major subsystems:

Client − This is the first process that issues a request to the second process: the server.

Server − The second process. Receives the request, carries it out, and sends a reply to the client.

The application is a set of services provided by servers. The servers need not know about clients, but the clients must know the identity of the servers.

There are two models based on the functionality of the client:

     Thin-client model − all processing and data management is done by the server; the client only runs the GUI software. Used for legacy systems migrated to client-server architectures. The drawback is a heavy processing load on both the server and the network.

     Thick/fat-client model − the server is in charge of data management, while the software on the client implements the application logic and the interactions with the system user. It works best when the capabilities of the client system are known beforehand. Its problem is its complexity when compared with the thin-client model.
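The request/reply exchange between the two subsystems can be shown with a minimal TCP example. This is only a sketch of the model using Python's standard `socket` module; the echo "service" and the loopback address are illustrative choices, not part of any particular system.

```python
# Minimal client-server exchange over TCP: the client issues a request,
# the server carries it out and sends a reply.
import socket
import threading

def serve_once(sock):
    """Server: accept one connection, read the request, reply."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())  # the 'service'

server = socket.socket()
server.bind(("127.0.0.1", 0))   # the client must know the server's address
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client: knows the server's identity; the server knows nothing of it
# until the request arrives.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(1024).decode()
client.close()
server.close()
print(reply)  # echo: ping
```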


Advantages:

1- Separation of responsibilities, such as user interface presentation and business logic processing.

2- Reusability of server components and potential for concurrency.

3- Simpler design and development of distributed applications.

4- Easier migration or integration of existing applications.

5- Effective use of resources when many clients access a high-performance server.


Disadvantages:

1- Lack of a heterogeneous infrastructure to deal with requirement changes.

2- Compromised security.

3- Limited server availability and reliability.

4- Fat clients mix presentation and business logic together.

2- Multi-Tier

Multi-Tier is a client–server architecture that physically separates the functions of presentation, application processing and data management. This allows developers to change or add a specific layer, instead of reworking the entire application, enabling the creation of flexible and reusable applications.

The three-tier architecture is the most common instance of the multi-tier model, typically composed of a presentation tier, an application tier, and a data storage tier. Each tier may run on a separate processor.

     Presentation Tier - the topmost level of the application such as a webpage or a system GUI (graphical user interface), communicating with other tiers. Interaction with the end-user is the primary goal here.

     Application Tier (Business Logic, Logic Tier, or Middle Tier) − manages the application, processes commands, and makes logical decisions, evaluations, and calculations, processing the data between the two surrounding layers.

     Data Tier - information stored and retrieved from the database or file system, for processing and presentation to user. It includes the data persistence mechanisms (database servers, file shares, etc.) and provides API (Application Programming Interface) to the application tier which provides methods of managing the stored data.
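The three tiers can be sketched as separate layers in code, each calling only the tier beneath it. All the names here (the score "database", the grading rule) are hypothetical; in a real system each layer would live on its own server.

```python
# A toy three-tier layout: presentation -> application -> data.

# Data tier: the persistence mechanism, exposing an API upward.
_db = {"alice": 3, "bob": 7}  # stands in for a database server

def data_get_score(user):
    return _db.get(user, 0)

# Application tier: business logic between the surrounding layers.
def logic_grade(user):
    score = data_get_score(user)          # calls down to the data tier
    return "pass" if score >= 5 else "fail"  # hypothetical grading rule

# Presentation tier: formats the result for the end user.
def present(user):
    return f"{user}: {logic_grade(user)}"

print(present("alice"))  # alice: fail
print(present("bob"))    # bob: pass
```

Because each tier only talks to its neighbour, swapping the dict for a real database server would change `data_get_score` alone, which is the flexibility the multi-tier model promises.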


Advantages:

1- Better performance than a thin-client approach, and simpler to manage than a thick-client approach.

2- Enhanced reusability and scalability − extra servers are added as demand increases.

3- Multi-threading support, reducing network traffic.

4- Maintainability and flexibility.


Disadvantages:

More critical server reliability and availability.

3 - Broker Architectural Style 

In the broker architectural style, a middleware architecture coordinates and enables communication between servers and clients. Objects communicate through a middleware system called an object request broker (software bus). Client and server do not interact directly but by proxy, which communicates with the mediator broker. A server provides services by registering and publishing its interfaces with the broker, and clients request services from the broker statically or dynamically by look-up.

Components of Broker Architectural Style:

     Broker - responsible for coordinating communication, which includes forwarding and dispatching results and exceptions. It can be either an invocation-oriented service or a document- or message-oriented broker to which clients send a message. Its functions range from locating a proper server and transmitting requests to sending responses back to clients, and it provides APIs for clients to make requests and for servers to respond.

     Stub - proxy for the client. Generated at compilation time, it provides transparency between the broker and the client, making a remote object appear like a local one.

     Skeleton - generated by the service-interface compilation on the server side; it is the server's proxy. It encapsulates low-level, system-specific networking functions and provides high-level APIs for communication between the server and the broker: it receives requests, unpacks them, unmarshals the method arguments, calls the suitable service, and marshals the result before sending it back to the client.
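The three components can be sketched in a few classes. This is a toy in-process model, not CORBA: the registry, the `CalcService`, and the method-forwarding scheme are all invented for illustration, and real brokers would add marshalling and networking in between.

```python
# A toy broker: the stub and skeleton are proxies, and all calls
# flow through the broker rather than directly client-to-server.
class Broker:
    def __init__(self):
        self._registry = {}

    def register(self, name, skeleton):
        """Server publishes its interface with the broker."""
        self._registry[name] = skeleton

    def forward(self, name, method, *args):
        """Locate the server and dispatch the request."""
        return self._registry[name].invoke(method, *args)

class Skeleton:
    """Server-side proxy: unpacks the request, calls the real service."""
    def __init__(self, service):
        self._service = service

    def invoke(self, method, *args):
        return getattr(self._service, method)(*args)

class Stub:
    """Client-side proxy: makes the remote object appear local."""
    def __init__(self, broker, name):
        self._broker = broker
        self._name = name

    def __getattr__(self, method):
        return lambda *args: self._broker.forward(self._name, method, *args)

class CalcService:           # hypothetical service implementation
    def add(self, a, b):
        return a + b

broker = Broker()
broker.register("calc", Skeleton(CalcService()))  # server registers
calc = Stub(broker, "calc")                       # client looks it up by name
print(calc.add(2, 3))  # looks like a local call; actually routed via broker
```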

Thursday, 1 March 2018

buffer underrun

     In computing, buffer underrun or buffer underflow is a common problem when burning data onto a CD. It happens when a buffer used to communicate between two devices or processes is fed with data at a lower speed than the data is being read from it, so the reader eventually drains the buffer empty. Handling an underrun would require the program or device reading from the buffer to pause its processing while the buffer refills, but recording data to a CD-R is a real-time process that must run nonstop without interruption of the signal, so an underrun ruins the recording.
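The mismatch between fill rate and drain rate can be simulated with a simple producer-consumer loop. The rates and tick count below are made-up numbers chosen only to force an underrun.

```python
# Simulating buffer underrun: the reader drains the buffer faster
# than the writer fills it, so reads eventually find it empty.
from collections import deque

buffer = deque()
underruns = 0

fill_rate, drain_rate = 1, 2      # blocks per tick (hypothetical rates)
for tick in range(10):
    for _ in range(fill_rate):    # producer: data arriving into the buffer
        buffer.append(f"block-{tick}")
    for _ in range(drain_rate):   # consumer: e.g. the CD burner, real time
        if buffer:
            buffer.popleft()
        else:
            underruns += 1        # buffer underrun: nothing left to read

print(underruns)  # one underrun per tick with these rates
```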

Tuesday, 20 February 2018

Types of operating systems

Types of operating systems:

a- Real-time − a multitasking operating system that executes real-time applications, using specialised scheduling algorithms to guarantee their performance and aiming at quick and predictable responses to events. Real-time systems incorporate traits of both time-sharing and event-driven designs: an event-driven system switches between tasks based on their priorities or on external events, while a time-sharing operating system switches tasks based on clock interrupts.

b- Multi-user − allows multiple users to access a computer system at the same time, e.g. time-sharing systems and Internet servers. Single-user operating systems can only have one user at a time, although they can still be multitasking systems, meaning multiple programs run concurrently.

c- Multi-tasking vs. single-tasking
The former allows more than one program to run at the same time, while the latter runs only one program. There are two types of multitasking system: pre-emptive (the operating system slices CPU time and dedicates one slot to each program) and cooperative (each process voluntarily gives time to the other processes).
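Cooperative multitasking can be sketched with Python generators, where each task voluntarily hands control back to a scheduler. The round-robin scheduler and the task names are invented for illustration; a pre-emptive system would instead interrupt tasks on a timer, without their cooperation.

```python
# Cooperative multitasking: each task yields to give time to the others.
def task(name, steps, log):
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                      # voluntarily hand control back

def run(tasks):
    """Round-robin scheduler: resume each task in turn until all finish."""
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)          # resume the task until its next yield
            tasks.append(current)  # not done: back of the queue
        except StopIteration:
            pass                   # task finished; drop it

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # ['A:0', 'B:0', 'A:1', 'B:1'] -- the two tasks alternate
```

Note that if one task never yields, the others starve; that is exactly the weakness pre-emptive scheduling fixes by slicing CPU time itself.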

d- Distributed - a group of independent computers behave like one single entity, similar to a network cluster. The processing is distributed across the participating machines.

e- Embedded - for use in embedded computer systems (small machines like PDAs). As they run on limited resources, their design is usually efficient considering the reduced autonomy with which they operate.