
Friday, 29 December 2017

Function Point

A function point measures the amount of business functionality an information system component delivers to the end user. As a single unit, the cost of a function point is calibrated from previous projects. Although there is no universally recognised sizing method, many approaches have tried to bring function point counting closer to a standard convention.

As of 2013, these are:

ISO Standards
COSMIC − ISO/IEC 19761:2011 Software engineering. A functional size measurement method.

FiSMA − ISO/IEC 29881:2008 Information technology - Software and systems engineering - FiSMA 1.1 functional size measurement method.

IFPUG − ISO/IEC 20926:2009 Software and systems engineering - Software measurement - IFPUG functional size measurement method.

Mark-II − ISO/IEC 20968:2002 Software engineering - Mk II Function Point Analysis - Counting Practices Manual.

NESMA − ISO/IEC 24570:2005 Software engineering - NESMA function size measurement method version 2.1 - Definitions and counting guidelines for the application of Function Point Analysis.


Object Management Group (OMG), an open membership and not-for-profit computer industry standards consortium, has adopted the Automated Function Point (AFP) specification led by the Consortium for IT Software Quality.

The Function Point Analysis (FPA) technique quantifies the functions within software that are meaningful to its users. The functions are based on the requirements specification.
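
As a rough illustrative sketch (assuming the standard IFPUG average-complexity weights and made-up counts that are not part of this post), the unadjusted function point total is just a weighted sum over the five function types:

#include <stdio.h>

int main(void)
{
    /* Hypothetical counts for a small system, one per IFPUG function type. */
    int external_inputs     = 10;
    int external_outputs    = 5;
    int external_inquiries  = 4;
    int internal_files      = 3;   /* internal logical files    */
    int external_interfaces = 2;   /* external interface files  */

    /* IFPUG average-complexity weights; low/high complexity items use other weights. */
    int ufp = external_inputs     * 4
            + external_outputs    * 5
            + external_inquiries  * 4
            + internal_files      * 10
            + external_interfaces * 7;

    printf("Unadjusted function points: %d\n", ufp);   /* 40 + 25 + 16 + 30 + 14 = 125 */
    return 0;
}

The project cost can then be estimated by multiplying this count by a cost per function point calibrated from previous projects.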

Sunday, 12 November 2017

TCL commands

Transaction Control Language (TCL) commands manage transactions in a database, usually the changes brought about by DML statements (update, delete, insert).

Commit - used to permanently save any transaction into the database.

Rollback - undoes all uncommitted changes, returning the database to the last committed state. It can also be used together with the savepoint command to roll back to a named savepoint instead of all the way to the last commit.

Savepoint - temporarily marks a point within a transaction, so that a later rollback operation can return to this saved state. Savepoint is the only TCL command that must be given a name so that a rollback operation can identify it.
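
To make this concrete, here is a minimal sketch in C using the SQLite library (SQLite, the table and the savepoint name are assumptions for illustration, not something the post mentions); the TCL statements themselves are passed as plain SQL through sqlite3_exec():

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
    {
        printf("Could not open database.\n");
        return 1;
    }

    sqlite3_exec(db, "CREATE TABLE accounts(id INTEGER, balance INTEGER);", NULL, NULL, &err);

    sqlite3_exec(db, "BEGIN;", NULL, NULL, &err);                       /* start the transaction     */
    sqlite3_exec(db, "INSERT INTO accounts VALUES (1, 100);", NULL, NULL, &err);
    sqlite3_exec(db, "SAVEPOINT before_update;", NULL, NULL, &err);     /* named savepoint           */
    sqlite3_exec(db, "UPDATE accounts SET balance = 0 WHERE id = 1;", NULL, NULL, &err);
    sqlite3_exec(db, "ROLLBACK TO before_update;", NULL, NULL, &err);   /* undo only the update      */
    sqlite3_exec(db, "COMMIT;", NULL, NULL, &err);                      /* make the insert permanent */

    sqlite3_close(db);
    return 0;
}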

The Weasel War Dance

In colloquial language, the weasel war dance is a set of moves done by a ferret to indicate playful behaviour. It consists of a frenzied series of hops sideways and backwards, often accompanied by an arched back and a frizzed-out tail. Ferrets exhibit a pointed lack of spatial awareness when in this state, often bumping into or falling over objects and furniture. The dance includes a clucking vocalization, known as "dooking".

Tuesday, 7 November 2017

Database Tuning

Database tuning is the process of fine-tuning either the parameters of a database installation or the relevant properties of a database application in order to improve performance. It's often recommended for huge database systems due to the complexity and amount of data to be handled.

Tuning is best carried out by highly specialised professionals, but it is a costly process and the results are not always noticeable. A more cost-effective alternative with similar effects is often a hardware upgrade, which restricts the need for tuning to a few areas, e.g. high-end applications. Another option is to optimise the data model by normalising its tables, e.g. using atomic fields and eliminating transitive dependencies.

Monday, 2 October 2017

critical path method (CPM)

The critical path method (CPM) is a progressive project management technique that separates critical from non-critical tasks in order to avoid bottlenecks and missed deadlines due to lack of prioritisation. It's best suited for projects whose diverse range of activities can be tackled in an ordered manner. Ideally, a flowchart is used to display how the tasks relate to each other (e.g. how the output of one activity is the input of a subsequent process). It's also helpful to determine beforehand the expected completion time for each task.

As an activities model designed for project management, its common elements include the time required to complete each activity, the dependencies between activities, milestones and deliverables. It makes use of the presentation capabilities of an operation flowchart, with activities pictured as nodes and the relationships between them represented by arrows. For this model to make sense, all individual project operations should be plotted with their respective durations.
Using these operation-duration relationships, CPM calculates the longest path of planned activities to logical end points or to the end of the project, and the earliest and latest that each activity can start and finish without delaying the whole project, as a means of providing a safe margin for work within a given deadline. This procedure distinguishes between "critical" activities and those with "total float", i.e. activities that can be delayed somewhat without compromising the project overall. The longest path of activities and their matching durations defines the shortest possible time to carry the project through to completion, with total float being the unused time available to activities off the critical path.
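
As a small sketch of the forward pass (in C, with a made-up four-task network A-D and invented durations, none of which come from the post), the earliest start of each task is the latest earliest finish among its predecessors, and the final earliest finish is the minimum project duration, i.e. the length of the critical path:

#include <stdio.h>

#define N 4

int main(void)
{
    /* Hypothetical tasks A..D, listed in topological order, with durations in days. */
    const char *name[N] = {"A", "B", "C", "D"};
    int duration[N]     = { 3,   2,   4,   2 };

    /* pred[i][j] is 1 if task j must finish before task i can start. */
    int pred[N][N] = {
        {0, 0, 0, 0},   /* A has no predecessors */
        {1, 0, 0, 0},   /* B depends on A        */
        {1, 0, 0, 0},   /* C depends on A        */
        {0, 1, 1, 0}    /* D depends on B and C  */
    };

    int es[N], ef[N];   /* earliest start / earliest finish */

    for (int i = 0; i < N; i++)
    {
        es[i] = 0;
        for (int j = 0; j < N; j++)
            if (pred[i][j] && ef[j] > es[i])   /* predecessors come earlier, so ef[j] is already known */
                es[i] = ef[j];
        ef[i] = es[i] + duration[i];
        printf("%s: earliest start %d, earliest finish %d\n", name[i], es[i], ef[i]);
    }

    printf("Minimum project duration: %d days\n", ef[N - 1]);
    return 0;
}

A symmetric backward pass from the project end gives the latest start and finish times, and the difference between latest and earliest start is each activity's total float.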

Monday, 11 September 2017

Object-oriented analysis

Object-oriented analysis means gathering the software requirements and identifying the elements of the problem domain. Both the requirements and the problem domain are described with an object model in mind, which ultimately guides the whole process through to completion.

Although the object-oriented approach started as a programming methodology in the 1960s with Simula, it wasn't until later that it became a major software design paradigm, with the work at Xerox PARC and a published article by Grady Booch in 1980 that emphasised the features of an object-oriented mindset for software developers.
Booch eventually expanded upon this work into what is known today as object-oriented methods.


In the object-oriented approach, requirements are organised around the objects that the system interacts with. Grady Booch defined OOA as a method for identifying requirements from the perspective of the classes and objects found in the vocabulary of the problem domain.

The primary tasks in object-oriented analysis are:

-Identifying objects;
-Organising them through some standard model diagram;
-Defining their attributes and methods;
-Describing how they interact with users, external agents and each other.

In OOD, the concepts in the analysis model are technology-independent, so they can be mapped onto classes, constraints and interfaces to provide a suitable solution to the problem domain. The implementation includes restructuring the classes and their associations.
Grady Booch has defined object-oriented design as "a method of design encompassing the process of object-oriented decomposition and a notation for depicting both logical and physical as well as static and dynamic models of the system under design".
Object-oriented programming uses these advantages to achieve great gains such as modularity and reusability.

According to Grady Booch "object–oriented programming is a method of implementation in which programs are organized as cooperative collections of objects, each of which represents an instance of some class, and whose classes are all members of a hierarchy of classes united via inheritance relationships".

Sunday, 27 August 2017

Unified Process

The Unified Process is one of the most important software process frameworks in the industry of late. It was spearheaded in the 1990s by three experts in object-oriented analysis: Jacobson, Booch and Rumbaugh. It is the first process model designed around the UML notation, and it quickly found wide acceptance as a best practice for return on investment. Among the UP features one could quote:
-Clear and precise instructions;
-promotes accountability;
-activities that specify their input and output artefacts;
-defines the dependency relationships among activities;
-comprises a well understood life cycle model;
-emphasises the use of the right procedures with the available resources;
-strong correlation with UML.

As a framework, UP is easily adapted to a variety of processes, encompassing the needs of different businesses. Its main features are:

1- Use case-driven - The process is understood from the user's viewpoint, without touching on implementation details. This means a comprehensive collection of all the functional requirements that the proposed system has to include. Non-functional requirements may be noted alongside the matching use cases, while additional requirements are kept in a separate document.

2- Architecture-centred - This implies taking the requirements gathered in the use cases and thinking of them as classes organised in components with defined roles within the system. The architecture can be thought of as the information structure as well as the likely operations of said system. The system architecture starts from the user's viewpoint through the use cases and is then shaped by implementation factors.

3- Incremental iterations - At each iteration, relevant use cases are analysed according to the chosen architecture. The artefact resulting at the end of every iteration is a system module or an executable version.  The next stage of the iteration involves the next system component to be implemented provided that the current one meets the user's expectations.

4- Focus on risk - This means that the most critical use cases are dealt with early in order to solve the most difficult problems first. The highest-risk requirements or use cases are usually the ones most likely to be unpredictable in their interaction with the remaining components. Thus, understanding them first is important to ensure tighter system cohesion.

UP stages

1- Inception: This stage seeks to establish a broad picture of the system. Here the main priorities are limited to requirements, conceptual models and high-level use cases. A development plan is also drawn up at this point in order to anticipate the amount of resources needed for the proposed project. The use cases will be incorporated into iterative cycles. Tests and implementation might occur during this stage if an early prototype is deemed necessary to avoid greater risks, but otherwise these are kept to a minimum.

2- Elaboration - Use cases are expanded upon in order to plot a basic architectural model. This means that the use cases are given more detail to decide which artefacts will serve as the input/output of the upcoming iterations. The conceptual model is revised and gives rise to the logical and physical design of the intended system.

3- Construction - With the basic system architecture established, the first release of the software product is the aim of this stage, which is almost entirely dedicated to coding and testing. At this point a basic agreement should have been reached between managers and users about the intended system.

4- Transition - The system is deployed to the user's work environment. Usually data transfer from the former system takes place along with the obligatory training course. Any discrepancies picked up by the end users are reported to the developers so they can work on the necessary improvements. It's still possible for requirements and code to undergo some minor revisions.




Thursday, 17 August 2017

Linked List

Introduction to linked list.

A linked list is a data structure that looks a lot like a regular list, except that it's a sequence of nodes. Each node comprises two parts: a data field and a data reference. The latter is what defines a linked list, as each reference points to the next node in the sequence. Without the data reference, the elements of a linked list wouldn't be bound at all; they'd be only loose entities with nothing to relate them. A head pointer is used to track the first element in the linked list, always pointing to it.

The linked list data structure is well suited to insertion or removal at any position in the list. However, finding elements according to some specified criterion is more difficult than with other compound data structures such as arrays, because it requires walking through the list until the desired item is found.

We can model a node of the linked list using a structure as follows:

typedef struct node{
    int data;
    struct node* next;
} node;

It should be noted that a linked list element is basically a struct, an object-esque entity in C which shares some similarities with a bona fide object from OO programming. For instance, a struct can group values of common data types such as ints and chars, which can be referred to later in the program. Also, notice how data stores the node's information while the next pointer holds the address of the next node.


First we declare a head pointer that always points to the first node of the list.

To add a node at the beginning of the list we need to create a new node. Since we will need a new node each time we insert into the list, we can write a function that creates a new node and returns it.

node* create(int data,node* next)
{
    node* new_node = (node*)malloc(sizeof(node));
    if(new_node == NULL)
    {
        printf("Error creating a new node.\n");
        exit(1);
    }
    new_node->data = data;
    new_node->next = next;

    return new_node;
}

Then we need to point the next pointer of the new node to the head pointer and point the head pointer to the new node.

node* prepend(node* head,int data)
{
    node* new_node = create(data,head);
    head = new_node;
    return head;
}

Traversing the linked list

To traverse the linked list, we start from the first node and move to the next one until we reach a NULL pointer.


First we define a callback type for the operation applied to each node:

typedef void (*callback)(node* data);

The following is the traverse() function:

void traverse(node* head,callback f)
{
    node* cursor = head;
    while(cursor != NULL)
    {
        f(cursor);
        cursor = cursor->next;
    }
}
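
Putting the pieces together, a minimal usage sketch (assuming the node type and the create(), prepend() and traverse() functions above are in the same file, with the #include lines placed at the top of that file) could look like this:

#include <stdio.h>
#include <stdlib.h>

/* Callback passed to traverse(): prints one node's data. */
void print_node(node* n)
{
    printf("%d -> ", n->data);
}

int main(void)
{
    node* head = NULL;

    head = prepend(head, 3);
    head = prepend(head, 2);
    head = prepend(head, 1);    /* the list is now 1 -> 2 -> 3 */

    traverse(head, print_node);
    printf("NULL\n");           /* prints: 1 -> 2 -> 3 -> NULL */

    return 0;
}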

Sunday, 4 June 2017

A Short Definition for weighted graph

A weighted graph is a graph that assigns a numerical weight to its elements. Since a graph is made up of vertices (or nodes) and edges, either can carry a labelled value: a vertex-weighted graph has weights on its vertices and an edge-weighted graph has weights on its edges.

Note: weight is just a numerical value assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph.
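
As a tiny sketch in C (the graph and its weights below are invented for illustration), an edge-weighted graph can be stored as a plain list of weighted edges, and the weight of the graph, or of any subgraph, is simply the sum of the relevant edge weights:

#include <stdio.h>

/* One weighted edge of an edge-weighted graph. */
typedef struct {
    int from;
    int to;
    int weight;
} edge;

int main(void)
{
    /* Hypothetical graph with vertices 0..3 and five weighted edges. */
    edge edges[] = {
        {0, 1, 4}, {0, 2, 1}, {1, 2, 2}, {1, 3, 5}, {2, 3, 8}
    };
    int num_edges = sizeof(edges) / sizeof(edges[0]);

    /* The weight of this (sub)graph is the sum of its edge weights. */
    int total = 0;
    for (int i = 0; i < num_edges; i++)
        total += edges[i].weight;

    printf("Total edge weight: %d\n", total);   /* 4 + 1 + 2 + 5 + 8 = 20 */
    return 0;
}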

Quick Introduction to Pointers

A pointer is a variable containing the address of another variable.

Take for instance the variable Variable1 declared in C:

int Variable1;
We can store the number 96 in it with

Variable1=96;
and print it out with

printf("%d",Variable1);

Instead of referring to this data store by name (in this instance, Variable1), we can refer to it by its computer memory address, which can be stored in a pointer, another variable in C. The operator & means "take the address of", while the operator * means "give me whatever is stored at the address this pointer holds". A pointer to the code above could look like this:

Pointer1=&Variable1;

This stores the address of Variable1 in the variable Pointer1.

To take the value, we would use *:

Variable2=*Pointer1;

which would have the same effect as

Variable2=Variable1;

or print it out with

printf("%d",*Pointer1);

So far it should be clear that & takes the memory address of a variable, while * retrieves the value stored at that address.

A pointer is created as below:

int *Pointer1;

The reason the symbol * comes after the type when declaring pointers is that it's necessary to specify the type the pointer points to. An int * points only to integers, a char * points only to chars, and so on.

Pointers are necessary for dynamic memory allocation, data structures and efficient handling of large amounts of data. Without pointers, you'd have to allocate all the program data globally or inside functions, resulting in a lot of waste for variables that are not being used but are still taking up memory space.
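
For example, here is a short sketch of dynamic allocation through a pointer (the variable names are purely illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 5;

    /* Request memory at run time; malloc returns a pointer to the new block. */
    int *values = malloc(n * sizeof *values);
    if (values == NULL)
    {
        printf("Allocation failed.\n");
        return 1;
    }

    for (int i = 0; i < n; i++)
        values[i] = i * 10;             /* same as *(values + i) = i * 10 */

    printf("Third value: %d\n", *(values + 2));

    free(values);                       /* hand the memory back when done */
    return 0;
}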

Interpreter and Compiler

A compiler is a computer program responsible for translating source code into a form that can be run directly by the computer. There is, though, a subtle distinction between a translator and a compiler: a translator converts a program from its formal source code into a specific target language, while a compiler is a special sort of translator that takes program code, often written in a high-level language, and turns it into machine code or intermediate code (e.g. bytecode, .NET code etc.). This distinction isn't always pronounced. Programming languages such as C and C++ are compiler-based, generating an executable (an .exe file on Windows) after being successfully compiled. An interpreter, on the other hand, doesn't convert source code into a standalone executable for a given platform. Rather, an interpreter reads the code line by line and produces the output directly on the client or server. Interpreters need a specific environment set up in order to work properly. Examples of interpreted languages include JavaScript and PHP.

Interpreter vs compiler:
- Interpreter: translates code line by line. Compiler: runs through the whole program to generate an executable file.
- Interpreter: starts reading code faster but executes it more slowly. Compiler: takes more time to scan the code, but afterwards execution is faster than with an interpreter.
- Interpreter: no intermediate code is generated. Compiler: produces intermediate object code that calls for linking, which in turn uses up more memory.
- Interpreter: translation stops at the first error. Compiler: throws error messages after scanning the entire code.
- Interpreter: input is in the form of single statements. Compiler: the whole program is the input.

Monday, 17 April 2017

Webfonts

Webfonts are fonts that don't come installed on the user's computer. Rather, they're downloaded by the browser on demand to display the desired webpage text. The only setback of using webfonts is the extra load time they add when rendering the page. If a browser fails to load a webfont, it falls back to a font from a list of web-safe fonts, which won't be the original font chosen by the web designer. The font formats are:

TrueType Fonts (TTF) - simply put, TTF are the most commonly used fonts across most systems, encompassing well-known typefaces such as Times New Roman, Arial etc.

OpenType Fonts (OTF) - Their main feature is being scalable according to text size.

The Web Open Font Format (WOFF) - Its default use is reserved for webpages. It was released in 2009 and is now included in the W3C recommendations. It's built to support font distribution from a server to a client over a network with bandwidth constraints.

The Web Open Font Format (WOFF 2.0) - TrueType/OpenType fonts with a better compression ratio than WOFF 1.0.

SVG Fonts/Shapes - allow SVG to be used as glyphs when displaying text.

Embedded OpenType Fonts (EOT) - OpenType fonts in a compact form designed to be embedded on web pages.

Monday, 6 March 2017

Codd's 12 Rules

Codd's 12 rules are a set of rules created by Edgar F. Codd to define what should be expected from a database management system (DBMS) in order for it to be regarded as truly relational.
Codd crafted these rules to prevent the original concept of relational databases from being diluted, as database vendors scrambled in the early 1980s to repackage existing products with a relational veneer. Even though such repackaged non-relational products eventually gave way to SQL DBMSs, no popular relational DBMS can be considered fully relational, whether by Codd's twelve rules or by the more formal definitions in his papers and books. Some rules are deemed controversial, especially rule 3, because of the debate on three-valued logic. The rules apply to any database system that manages stored data using only its relational capabilities; this requirement is the foundation rule (Rule 0), which acts as a base for all the others.

Rule 1: Information Rule
The data stored in a database, may it be user data or metadata, must be a value of some table cell. Everything in a database must be stored in a table format.

Rule 2: Guaranteed Access Rule
Every single data element (value) is guaranteed to be accessible logically with a combination of table-name, primary-key (row value), and attribute-name (column value). No other means, such as pointers, can be used to access data.

Rule 3: Systematic Treatment of NULL Values
The NULL values in a database must be given a systematic and uniform treatment. This is a very important rule because a NULL can be interpreted as one of the following − data is missing, data is not known, or data is not applicable.

Rule 4: Active Online Catalog
The structure description of the entire database must be stored in an online catalog, known as data dictionary, which can be accessed by authorized users. Users can use the same query language to access the catalog which they use to access the database itself.

Rule 5: Comprehensive Data Sub-Language Rule
A database can only be accessed using a language having linear syntax that supports data definition, data manipulation, and transaction management operations. This language can be used directly or by means of some application. If the database allows access to data without any help of this language, then it is considered as a violation.

Rule 6: View Updating Rule
All the views of a database, which can theoretically be updated, must also be updatable by the system.

Rule 7: High-Level Insert, Update, and Delete Rule
A database must support high-level insert, update, and delete operations. These must not be limited to a single row; the system must also support union, intersection and minus operations to yield sets of data records.

Rule 8: Physical Data Independence
The data stored in a database must be independent of the applications that access the database. Any change in the physical structure of a database must not have any impact on how the data is being accessed by external applications.

Rule 9: Logical Data Independence
The logical data in a database must be independent of its user’s view (application). Any change in logical data must not affect the applications using it. For example, if two tables are merged or one is split into two different tables, there should be no impact or change on the user application. This is one of the most difficult rules to apply.

Rule 10: Integrity Independence
A database must be independent of the application that uses it. All its integrity constraints can be independently modified without the need of any change in the application. This rule makes a database independent of the front-end application and its interface.

Rule 11: Distribution Independence
The end-user must not be able to see that the data is distributed over various locations. Users should always get the impression that the data is located at one site only.

Rule 12: Non-Subversion Rule
If a system has an interface that provides access to low-level records, then the interface must not be able to subvert the system and bypass security and integrity constraints.

Impedance mismatch

The object-relational impedance mismatch is a set of problems that arise in application development when an object from an object-oriented paradigm is to be saved in a relational database, particularly because objects or class definitions must be mapped to database tables defined by a relational schema.

The object-oriented paradigm is based on proven software engineering principles. The relational paradigm, however, is based on proven mathematical principles. Because the underlying paradigms are different the two technologies do not work together seamlessly.
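
As a minimal sketch of the mismatch in C (assuming the SQLite library and a made-up account struct, neither of which the post mentions), notice how the object's fields have to be mapped by hand onto table columns and bound parameters:

#include <stdio.h>
#include <sqlite3.h>

/* An application-side "object": a plain C struct. */
typedef struct {
    int    id;
    char   name[64];
    double balance;
} account;

int main(void)
{
    sqlite3 *db;
    sqlite3_stmt *stmt;
    char *err = NULL;
    account a = {1, "Alice", 250.0};

    sqlite3_open(":memory:", &db);

    /* The relational side: a schema that mirrors the struct, maintained separately. */
    sqlite3_exec(db, "CREATE TABLE account(id INTEGER PRIMARY KEY, name TEXT, balance REAL);",
                 NULL, NULL, &err);

    /* Field-by-field mapping from the object to the row. */
    sqlite3_prepare_v2(db, "INSERT INTO account(id, name, balance) VALUES(?, ?, ?);",
                       -1, &stmt, NULL);
    sqlite3_bind_int(stmt, 1, a.id);
    sqlite3_bind_text(stmt, 2, a.name, -1, SQLITE_TRANSIENT);
    sqlite3_bind_double(stmt, 3, a.balance);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);

    sqlite3_close(db);
    return 0;
}

Object-relational mapping tools automate this boilerplate, but the underlying mismatch between class definitions and relational schemas remains.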

Thursday, 2 March 2017

Databases

Bringing our focus to today's technology, one thing immediately springs to mind: we have to deal with an inordinate amount of data like never before in history. Nearly everything we do produces a measure of easily verifiable data, whether at work through productivity metrics or as consumers of products and services that can be tallied, allowing this data to be tabulated and used for noticing patterns and trends over time. Data is the smallest unit of meaning; information is compiled data that serves a certain purpose; and knowledge is information that can be used to achieve an end within a context.

In short, data is the representation of "facts" or "observations", whereas information refers to the meaning thereof (according to some interpretation). Knowledge, on the other hand, refers to the ability to use information to achieve intended ends. The transformation of data into information, and further into knowledge, depends largely on advanced systems capable of doing this processing while also allowing human analysts to examine the data and make sense of it for future decision-making. These systems are what we call databases.

Database systems are nowadays an essential part of life in modern society; all of us are regular users of at least one major database throughout our existence, be it through library systems, bank transactions, grocery store purchases or hotel and airline reservations.

Traditional database applications were built to rely heavily on rigidly structured textual and numeric data. As database technology continues to make forays into new territory and reach a larger number of users, it has become clear that new kinds of database systems are needed to glean information from data that rigidly structured applications can't handle. Hence nowadays we have multimedia databases and geographic databases (involving maps and satellite images).

Keeping a large amount of data and running regular, agile queries over it calls for a system specialised in just that: the proper handling of massive amounts of information. Hence we need a DBMS (database management system), software that manages a collection of related data and provides the following capabilities:

definition: specifying data types (and other constraints to which the data must conform) and data organization
construction: the process of storing the data on some medium (e.g., magnetic disk) that is controlled by the DBMS
manipulation: querying, updating and report generation
sharing: allowing multiple users and programs to access the database "simultaneously"
system protection: preventing database from becoming corrupted when hardware or software failures occur
security protection: guarding the database against malicious or unauthorised access.

Mini-World

The first step in designing a database is to understand the business context and all the relevant interactions and transactions that take place within the business environment. This set of interactions and dynamics pertinent to the business reality can be thought of as the mini-world: a scaled-down version of a business setting that serves as the main framework for the design of a database.

A mini-world is the portion of the real world that a database is modelled after: the setting of a business activity represented in the database domain. Examples of mini-worlds include a medical office, a retail business, an accounting firm etc. When we need to retrieve information about a mini-world, we query the database modelled after it. When you go to a medical appointment, the receptionist looks up your name or ID number in this model, a database with information on all patients. If the model stays faithful to its mini-world, it will confirm the appointment. By the same token, we can also query the data to find information that is relevant to the business, such as the most frequent transactions, the highest-demand products, the most punctual payers etc. A mini-world can comprise a whole business or parts thereof (e.g. a single department or a branch of a large corporation).