
Wednesday, 16 May 2018

a happy moment

Just thinking about one of those important milestones in life when you sincerely start to wonder what lies beyond keeping up with the Joneses. A big part of this realisation can be ascribed to the entertainment media, which is eternally hell-bent on filling our minds with powerful imagery and vivid depictions of successful individuals living enticing lives in stunning locations.
The studiously designed visuals can't help making us think that we are missing out on something that will make our lives complete, something that will suddenly make our lives meaningful. Or relevant. As if our current life is a waste, whatever it is.
It takes a great many repeated experiences to see that the media's portrayed reality doesn't align with the first impression we formed from those same situations in adverts and movies. If you somehow manage to achieve those lifestyles, let alone maintain them, you come to the sad realisation that this sort of pleasure is ephemeral: the thrill is over almost before it began, and you are left feeling that you need something even more luxurious to attain that elusive level of satisfaction. As if the eternal search for the unicorn is all that matters.
Once you're old enough, you eventually realise the importance of the role that an "ordinary" life has. Understanding the essential role of the ordinary and the simple entails an honest enjoyment of life for what it is. When your mind isn't preoccupied with being anxious and displeased about your less-than-glamorous reality, you can start to appreciate and be thankful for everything you have, and see that good health, freedom and friendship really are the best things in life.

Monday, 14 May 2018

Structured Analysis and Design Technique (SADT)

Structured analysis and design technique (SADT) is a method by Douglas T. Ross for describing systems as a hierarchy of functions, using two types of diagrams: activity models and data models. Its notation consists of boxes representing entities and activities, related by arrows. SADT can serve as a functional analysis tool for a given process at varying tiers of detail, and can be used both for defining users' IT needs and for describing a business process.

The structured analysis and design technique uses top-down decomposition, conducted only in the physical domain from an axiomatic design viewpoint.
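The hierarchy of activity boxes described above can be sketched as a simple data structure. This is a minimal illustration, not SADT tooling: the `Activity` class, the arrow-role names, and the "Process Order" example are all assumptions made up for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One SADT activity box with its arrow roles and its decomposition."""
    name: str
    inputs: list = field(default_factory=list)      # data consumed (left arrows)
    controls: list = field(default_factory=list)    # constraints (top arrows)
    outputs: list = field(default_factory=list)     # data produced (right arrows)
    mechanisms: list = field(default_factory=list)  # resources (bottom arrows)
    children: list = field(default_factory=list)    # top-down decomposition

# Hypothetical top-level box, decomposed into two child activities.
a0 = Activity(
    "Process Order",
    inputs=["customer order"],
    controls=["pricing policy"],
    outputs=["shipped goods"],
    mechanisms=["warehouse staff"],
    children=[
        Activity("Validate Order", inputs=["customer order"], outputs=["valid order"]),
        Activity("Fulfil Order", inputs=["valid order"], outputs=["shipped goods"]),
    ],
)

def depth(activity):
    """Number of tiers of detail in the decomposition hierarchy."""
    return 1 + max((depth(c) for c in activity.children), default=0)

print(depth(a0))  # 2
```

Each child box refines its parent, which is what "varied tiers of detail" means in practice: the same process is described once coarsely and again in finer-grained boxes.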


Monday, 9 April 2018

Requirements Traceability Matrix

A traceability matrix is a document that correlates and traces business, application, security or any other requirements to their implementation, testing or completion, relating different system components so that their status stays up to date as the system nears completion. It captures all requirements proposed by the client and their traceability in a single document delivered at the conclusion of the life cycle, mapping each user requirement to its test cases. The main purpose of a requirement traceability matrix is to make sure that every requirement is covered by test cases, so that no feature is left out during testing.

The parameters of a requirement traceability matrix include:

Requirement ID
Requirement Type and Description
Trace to design specification
Unit test cases
Integration test cases
System test cases
User acceptance test cases
Trace to test script
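A matrix with the parameters above can be modelled as a small lookup table, which makes its main purpose (finding uncovered requirements) a one-liner. The requirement IDs, test-case IDs and design references below are hypothetical, invented for illustration.

```python
# One row per requirement; keys mirror the matrix parameters listed above.
matrix = {
    "REQ-001": {"description": "User can log in",
                "design": "DS-4.2",
                "unit": ["UT-01"], "integration": ["IT-03"],
                "system": ["ST-02"], "uat": ["UAT-01"]},
    "REQ-002": {"description": "Password reset by email",
                "design": "DS-4.3",
                "unit": [], "integration": [], "system": [], "uat": []},
}

def uncovered(matrix):
    """Requirements with no test case at any level: a traceability gap."""
    levels = ("unit", "integration", "system", "uat")
    return [rid for rid, row in matrix.items()
            if not any(row[level] for level in levels)]

print(uncovered(matrix))  # ['REQ-002']
```

Running the check flags REQ-002, which is exactly the "feature left out during testing" the matrix exists to catch.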

James Martin's Rapid Application Development

James Martin's rapid application development (RAD) is an iterative, adaptive approach to rapid development. RAD values an adaptive development process over extensive up-front planning. Prototypes are often used in addition to, or sometimes even in place of, design specifications.

RAD is especially well suited to companies that are comfortable with Agile and that favour building from components or pre-existing classes (APIs).

Phases -

Requirements planning phase – Users, managers, and IT staff members discuss and agree on business needs, project scope, constraints, and system requirements.

User design phase – Users interact with systems analysts and develop models and prototypes that represent all system processes, inputs, and outputs, typically using a combination of joint application development (JAD) techniques and CASE tools to translate user needs into working models.

Construction phase – Program and application development; users can still suggest changes. Includes unit, integration and system testing.

Cutover phase – Data conversion, testing, changeover to the new system and user training are the staples of the last stage of James Martin's RAD.

Modelling and construction may occur in parallel. The former typically lasts from 60 to 90 days, while the latter may make use of component reuse, automatic code generation and testing.

Joint Application Design

Joint application design (JAD) is a core requirements-gathering process of the dynamic systems development method (DSDM). It is basically a workshop where users and IT professionals meet to define the business requirements for the proposed system. Through JAD workshops, the knowledge workers and IT specialists are able to resolve any differences between themselves regarding the new system. The premise is that miscommunications carry far more serious repercussions if not addressed until later in the process. In the end, this process results in a new information system that is feasible and appealing to both the designers and the end users.

True to its Agile nature, JAD is most effective in small, clearly focused projects and less effective in large complex projects.

Prototyping - evolutionary process

Software prototyping, a process that has been gaining prominence since the late 1980s, develops software by iteratively improving upon incomplete versions of the target program.

Prototyping enables steady feedback from users early in the process and is a reliable source of accuracy during the first development stages for determining the viability of deadlines and milestones. It is most useful when requirements are still undefined, the project needs to be executed in a hurry, and the application domain isn't well known at the specification stage.

A prototype allows users to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than relying on requirements-based descriptions. Interaction design in particular makes heavy use of prototyping with this goal.

The process of prototyping involves the following stages:

1- Identify basic requirements, including input and output information. Non-functional requirements can be set aside for now.

2- Develop initial prototype, with emphasis on user interfaces.

3- Review. The user goes over the prototype and gives feedback.

4- Revise and enhance the prototype, improving it through feedback. Negotiation about what is within the scope of the contract/product may be necessary. The last two steps are repeated for approved changes.

Types of prototyping

Throwaway prototyping, or close-ended prototyping, produces a model that will eventually be discarded rather than worked on to become the final product. After basic requirements gathering, a simple working model is built to showcase the user's requirements, so the user can form an idea of what the working software will look like. It is also called rapid prototyping. It may include storyboards, animatics or drawings: non-functional designs that show how the system will look. As such, a throwaway prototype is mostly used to validate requirements and elicit new ones.

Evolutionary (or breadboard) prototyping consists of constantly refining a prototype until it becomes the final version. The evolutionary prototype forms the core of the target system, with improvements and further requirements built on it.

Sunday, 11 March 2018

Distributed Objects

In distributed systems, components on different platforms can talk to each other over a network. The best-known type of distributed system is the client-server model, which forms the base for multi-tier architectures. Alternatives are broker architectures such as CORBA, and Isis' group communication system, which also happen to be examples of middleware.

Several technology frameworks support distributed architectures, including .NET, J2EE and CORBA. Middleware is a software layer that supports and simplifies the development and execution of distributed applications, acting as a buffer between the applications and the network and managing the different components of the distributed system.

Middleware acts as an intermediary for the distributed system.

The basis of a distributed architecture is its transparency, reliability, and availability.


Advantages:

Resource sharing − hardware and software.

Openness − Flexibility for hardware and software from different vendors.

Concurrency − Concurrent processing to enhance performance.

Scalability − Increased throughput by adding new resources.

Fault tolerance − continuous operation after a fault has occurred.


Disadvantages:

Complexity − more than centralised systems.

Security − More susceptible to external attack.

Manageability − More effort required for system management; related to the complexity above.

Unpredictability − Unpredictable responses depending on the system organisation and network load.

1- Client-Server

The client-server architecture is the most common distributed system architecture. Major subsystems:

Client − This is the first process that issues a request to the second process: the server.

Server − The second process. Receives the request, carries it out, and sends a reply to the client.

The application is a set of services provided by servers. The servers need not know about clients, but the clients must know the identity of the servers.
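The request/reply cycle between the two processes can be sketched with plain sockets. This is a minimal single-request illustration, not a production server; the loopback address and the uppercase "service" are assumptions made for the example.

```python
import socket
import threading

# Server process: binds to an address the client will have to know.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
host, port = srv.getsockname()

def serve_once():
    """Receives the request, carries it out, and sends a reply to the client."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(request.upper())   # the "service": uppercase the request

t = threading.Thread(target=serve_once, daemon=True)
t.start()

# Client process: must know the server's identity (host, port) to issue a request.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((host, port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply)  # b'HELLO'
```

Note the asymmetry the text describes: the server never needs to know who the client is in advance, while the client cannot start the exchange without the server's address.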

There are two models based on the functionality of the client:

     Thin-client model − all processing and data management is done by the server; the client only runs the GUI software. Used for legacy systems migrated to client-server architectures. The drawback is a heavy processing load on both the server and the network.

     Thick/fat-client model − the server is in charge of data management, while the software on the client implements the application logic and the interactions with the system user. It works best when the capabilities of the client system are known beforehand. Its drawback is its complexity compared to the thin-client model.


Advantages:

1- Separation of responsibilities such as user interface presentation and business logic processing.

2- Reusability of server components and potential for concurrency

3- Design and development of distributed applications made simple.

4- Migration or integration of existing applications made easy.

5- Effective use of resources when many clients are accessing a high-performance server.


Disadvantages:

1- Lack of heterogeneous infrastructure to deal with the requirement changes.

2- Security compromised

3- Limited server availability and reliability.

4- Fat clients with presentation and business logic together.

2- Multi-Tier

Multi-Tier is a client–server architecture that physically separates the functions of presentation, application processing and data management. This allows developers to change or add a specific layer, instead of reworking the entire application, enabling the creation of flexible and reusable applications.

The three-tier architecture is the most common instance of the multi-tier model, typically composed of a presentation tier, an application tier, and a data storage tier. Each tier may run on a separate processor.

     Presentation Tier - the topmost level of the application such as a webpage or a system GUI (graphical user interface), communicating with other tiers. Interaction with the end-user is the primary goal here.

     Application Tier (Business Logic, Logic Tier, or Middle Tier) − manages the application, processes the commands and makes logical decisions, evaluations and calculations, processing the data that moves between the two surrounding layers.

     Data Tier − stores and retrieves information from the database or file system for processing and eventual presentation to the user. It includes the data persistence mechanisms (database servers, file shares, etc.) and provides an API (application programming interface) through which the application tier manages the stored data.
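The separation of the three tiers can be sketched as three classes, each talking only to its neighbour. This is a toy illustration of the layering rule, under invented names: the in-memory dict stands in for a real database, and `register`/`handle_form` are hypothetical operations.

```python
class DataTier:
    """Persistence mechanism: an in-memory dict stands in for a database."""
    def __init__(self):
        self._rows = {}
    def save(self, key, value):
        self._rows[key] = value
    def load(self, key):
        return self._rows.get(key)

class LogicTier:
    """Business rules: validates and processes between the surrounding tiers."""
    def __init__(self, data):
        self.data = data
    def register(self, name):
        if not name:
            raise ValueError("name required")
        user_id = name.lower()
        self.data.save(user_id, {"name": name})
        return user_id

class PresentationTier:
    """User-facing layer: talks only to the logic tier, never to the data tier."""
    def __init__(self, logic):
        self.logic = logic
    def handle_form(self, form):
        user_id = self.logic.register(form["name"])
        return f"Welcome, {form['name']}! (id={user_id})"

ui = PresentationTier(LogicTier(DataTier()))
print(ui.handle_form({"name": "Ada"}))  # Welcome, Ada! (id=ada)
```

Because each tier depends only on the one below it, swapping the dict for a real database server changes `DataTier` alone, which is the "change one layer instead of reworking the entire application" benefit described above.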


Advantages:

1- Better performance and simpler to manage than a thin-client approach.

2- Enhanced reusability and scalability − extra servers are added as demands increase.

3- Multi-threading support, reducing network traffic.

4- Maintainability and flexibility.


Disadvantages:

More critical server reliability and availability.

3- Broker Architectural Style

This middleware architecture coordinates and enables the communication between servers and clients. Objects communicate through a middleware system called an object request broker (software bus). Client and server do not interact directly but through proxies, which communicate with the mediating broker. A server provides services by registering and publishing its interfaces with the broker, and clients request services from the broker statically or dynamically by look-up.

Components of Broker Architectural Style:

     Broker - responsible for coordinating communication, which includes forwarding and dispatching results and exceptions. It can be either an invocation-oriented service or a document- or message-oriented broker to which clients send a message. Its functions range from locating a proper server and transmitting requests, to sending responses back to clients and providing APIs for clients to request and servers to respond.

     Stub - the client-side proxy. Generated at compile time, it provides transparency between the broker and the client, making a remote object appear like a local one.

     Skeleton - generated by compiling the service interface on the server side; it is the server's proxy. It encapsulates low-level, system-specific networking functions and provides high-level APIs for communication between the server and the broker: it receives requests, unpacks them, unmarshals the method arguments, calls the suitable service, and marshals the result before sending it back to the client.
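The three components above can be sketched in a few classes. This is an in-process toy, not CORBA: the networking and marshalling are elided, and the `Calculator` service and all names are invented for illustration.

```python
class Broker:
    """Locates the proper server and forwards requests and results."""
    def __init__(self):
        self._registry = {}
    def register(self, service, skeleton):
        self._registry[service] = skeleton       # server publishes its interface
    def forward(self, service, method, *args):
        return self._registry[service].invoke(method, *args)

class Skeleton:
    """Server-side proxy: unpacks the request and calls the suitable service."""
    def __init__(self, servant):
        self.servant = servant
    def invoke(self, method, *args):
        return getattr(self.servant, method)(*args)

class Stub:
    """Client-side proxy: makes the remote object appear like a local one."""
    def __init__(self, broker, service):
        self.broker = broker
        self.service = service
    def __getattr__(self, method):
        # Any attribute access becomes a request routed through the broker.
        return lambda *args: self.broker.forward(self.service, method, *args)

class Calculator:                     # the actual service implementation
    def add(self, a, b):
        return a + b

broker = Broker()
broker.register("calc", Skeleton(Calculator()))  # server side
calc = Stub(broker, "calc")                      # client side, found by look-up
print(calc.add(2, 3))  # 5
```

The client calls `calc.add(2, 3)` exactly as if `Calculator` were local; only the stub, broker and skeleton know the call was mediated, which is the transparency the pattern is for.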