
Sunday, 4 June 2017

A Short Definition of a Weighted Graph

A weighted graph is a graph that assigns a numerical weight to each branch. Since each branch is made up of vertices (or nodes) and edges, either element can carry a weight as a labelled value: a vertex-weighted graph has weights on its vertices, and an edge-weighted graph has weights on its edges.

Note: a weight is just a numerical value assigned as a label to a vertex or edge of a graph. The weight of a subgraph is the sum of the weights of the vertices or edges within that subgraph.
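
To make the definition concrete, here is a minimal sketch in C (the matrix values are made up for illustration) of an edge-weighted graph stored as an adjacency matrix, where each cell holds the weight of the edge between two vertices:

#include <stdio.h>

#define N 3        /* number of vertices */
#define NO_EDGE 0  /* a weight of 0 here means "no edge" */

int main(void)
{
    /* weight[i][j] holds the weight of the (undirected) edge between
       vertices i and j; the matrix is symmetric */
    int weight[N][N] = {
        { NO_EDGE, 4,       7       },
        { 4,       NO_EDGE, 2       },
        { 7,       2,       NO_EDGE }
    };

    /* the weight of the graph is the sum of its edge weights */
    int total = 0;
    for (int i = 0; i < N; i++)
        for (int j = i + 1; j < N; j++)  /* count each edge once */
            total += weight[i][j];

    printf("total weight: %d\n", total);  /* prints 13 */
    return 0;
}

Summing the cells above the diagonal gives the weight of the whole graph, exactly as the note above describes for subgraphs.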

Quick Introduction to Pointers

A pointer is a variable containing the address of another variable.

Take, for instance, the variable Variable1 declared in C:

int Variable1;

We can store the number 96 in it with

Variable1 = 96;

and print it out with

printf("%d", Variable1);

Instead of referring to this data store by its name (in this instance, Variable1), we can refer to it by its memory address, which can be kept in a pointer, which is just another variable in C. The operator & means "take the address of", while the operator * means "give me whatever is stored at the address the pointer holds". A pointer to the variable above could be set like this:

Pointer1 = &Variable1;

This stores the address of Variable1 in the variable Pointer1.

To retrieve the value at that address, we use *:

Variable2 = *Pointer1;

which would have the same effect as

Variable2 = Variable1;

or print it out with

printf("%d",*Pointer1);

So far it should be clear that & takes the memory address of a variable, while * takes hold of the value stored at that address.

A pointer is declared as below:

int *Pointer1;

The reason the symbol * comes after the type when declaring pointers is that it's necessary to specify the type of data the pointer points to. An int * points only to integers, a char * points only to chars, and so on.
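
Putting the snippets above together, here is a complete program (using the same variable names) that can be compiled and run as-is:

#include <stdio.h>

int main(void)
{
    int Variable1 = 96;          /* an ordinary integer variable */
    int *Pointer1 = &Variable1;  /* Pointer1 holds the address of Variable1 */
    int Variable2 = *Pointer1;   /* same effect as Variable2 = Variable1 */

    printf("%d\n", Variable1);   /* prints 96 */
    printf("%d\n", *Pointer1);   /* prints 96, via the pointer */
    printf("%d\n", Variable2);   /* prints 96, from the copy */
    return 0;
}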

Pointers are necessary for dynamic memory allocation, data structures and efficient handling of large amounts of data. Without pointers, you'd have to allocate all the program's data globally or inside functions, resulting in a lot of waste for variables that are not being used but are still taking up memory space.
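
As a small sketch of what dynamic allocation looks like in practice (the array size here is arbitrary), malloc hands back a block of memory that can only be reached through a pointer, and free returns it when it's no longer needed:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 5;
    /* request space for n integers at run time; malloc returns a pointer */
    int *numbers = malloc(n * sizeof *numbers);
    if (numbers == NULL)
        return 1;                    /* allocation failed */

    for (int i = 0; i < n; i++)
        numbers[i] = i * i;          /* use the memory through the pointer */

    printf("%d\n", numbers[n - 1]);  /* prints 16 */

    free(numbers);                   /* give the memory back when done */
    return 0;
}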

Interpreter and Compiler

A compiler is a computer program responsible for translating source code into a form that can be run directly by the computer. There is, though, a minor difference between a translator and a compiler: a translator converts a program from its formal source code to a specific target language. Compilers are a special sort of translator, taking program code usually written in a high-level language and turning it into machine code or intermediate code (e.g. bytecode, .NET code etc.). The difference between translator and compiler isn't always pronounced. Programming languages such as C and C++ are compiler-based, as they generate an executable file (an .exe file if you're using Windows) after being successfully compiled. An interpreter, on the other hand, doesn't convert the source code into a separate executable for a specific platform or system. Rather, an interpreter reads the code line by line and produces its output directly on the client or server. Interpreters need a specific environment set up in order to work properly. Examples of interpreted languages include Javascript and PHP.

Interpreter | Compiler
Translates code line by line. | Runs through the whole program to generate an executable (e.g. an .exe file on Windows).
Reads code faster but executes it more slowly. | Takes more time to scan the code, but afterwards execution is faster than with an interpreter.
No intermediate code is generated. | Generates intermediate object code that calls for linking, which in turn uses up more memory.
Translation stops at the first error. | Throws its error messages only after scanning the entire code.
Input is taken in the form of single statements. | The whole program is the input.

Monday, 17 April 2017

Webfonts

Webfonts are fonts that don't come installed on the user's computer. Rather, they're downloaded by the browser on demand to display the webpage text as intended. The only setback of using webfonts is the inevitable load time they add to rendering the page. If a browser fails to obtain a certain website's font, it will fall back to a font from a list of web-safe fonts, which won't be the original font chosen by the web designer. The main font formats are:

TrueType Fonts (TTF) - simply put, TTF is the most commonly used font format across systems, encompassing well-known typefaces such as Times New Roman, Arial etc.

OpenType Fonts (OTF) - Their main feature is being scalable according to text size.

The Web Open Font Format (WOFF) - its default use is reserved for webpages. It was released in 2009 and is now part of the W3C recommendations. It's built to support font distribution from a server to a client over a network with bandwidth constraints.

The Web Open Font Format (WOFF 2.0) - a TrueType/OpenType font with a better compression ratio than WOFF 1.0.

SVG Fonts/Shapes - fonts whose glyphs are defined as SVG shapes, rendered when displaying text.

Embedded OpenType Fonts (EOT) - OpenType fonts in a compact format that can be embedded on web pages.

Monday, 6 March 2017

Codd's 12 rules are a set of rules created by Edgar F. Codd, known for defining what should be expected from a database management system (DBMS) in order for it to be regarded as truly relational.
Codd crafted these rules to prevent the original concept of relational databases from being diluted, as database vendors scrambled in the early 1980s to repackage existing products with a relational veneer. Even though such repackaged non-relational products eventually gave way to SQL DBMSs, no popular relational DBMS can be considered fully relational, whether by Codd's twelve rules or by the more formal definitions in his papers and books. Some rules are deemed controversial, especially rule 3, because of the debate on three-valued logic. The rules rest on the requirement that a system manage its stored data using only its relational capabilities; this foundation rule (sometimes called rule 0) acts as the base for all the others.

Rule 1: Information Rule
The data stored in a database, be it user data or metadata, must be a value of some table cell. Everything in a database must be stored in a table format.

Rule 2: Guaranteed Access Rule
Every single data element (value) is guaranteed to be accessible logically with a combination of table-name, primary-key (row value), and attribute-name (column value). No other means, such as pointers, can be used to access data.

Rule 3: Systematic Treatment of NULL Values
The NULL values in a database must be given a systematic and uniform treatment. This is a very important rule because a NULL can be interpreted as one of the following: data is missing, data is not known, or data is not applicable.

Rule 4: Active Online Catalog
The structure description of the entire database must be stored in an online catalog, known as the data dictionary, which can be accessed by authorised users. Users can use the same query language to access the catalog that they use to access the database itself.

Rule 5: Comprehensive Data Sub-Language Rule
A database can only be accessed using a language with a linear syntax that supports data definition, data manipulation, and transaction management operations. This language can be used directly or by means of some application. If the database allows access to data without the help of this language, it is considered a violation.

Rule 6: View Updating Rule
All the views of a database, which can theoretically be updated, must also be updatable by the system.

Rule 7: High-Level Insert, Update, and Delete Rule
A database must support high-level insert, update, and delete operations. This must not be limited to a single row; that is, it must also support union, intersection and minus operations to yield sets of data records.

Rule 8: Physical Data Independence
The data stored in a database must be independent of the applications that access the database. Any change in the physical structure of a database must not have any impact on how the data is being accessed by external applications.

Rule 9: Logical Data Independence
The logical data in a database must be independent of its user's view (application). Any change in the logical data must not affect the applications using it. For example, if two tables are merged or one is split into two different tables, there should be no impact or change on the user application. This is one of the most difficult rules to apply.

Rule 10: Integrity Independence
A database must be independent of the application that uses it. All its integrity constraints can be independently modified without the need of any change in the application. This rule makes a database independent of the front-end application and its interface.

Rule 11: Distribution Independence
The end-user must not be able to see that the data is distributed over various locations. Users should always get the impression that the data is located at one site only.

Rule 12: Non-Subversion Rule
If a system has an interface that provides access to low-level records, then the interface must not be able to subvert the system and bypass security and integrity constraints.

Impedance mismatch

The object-relational impedance mismatch is a set of problems that arises in application development when an object from an object-oriented paradigm is to be saved in a relational database, particularly because objects or class definitions must be mapped to database tables defined by a relational schema.

The object-oriented paradigm is based on proven software engineering principles, while the relational paradigm is based on proven mathematical principles. Because the underlying paradigms are different, the two technologies do not work together seamlessly.
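
As a rough illustration in C (the struct and column names here are invented for the sketch, not taken from any real schema), compare how an in-memory object naturally points at related objects, while a relational table must flatten that pointer into a foreign-key column:

#include <stdio.h>

/* In memory, an "object" naturally points at related objects... */
struct Order;

struct Customer {
    int id;
    const char *name;
    struct Order *orders;   /* this customer's orders, reached by pointer */
};

struct Order {
    int id;
    double amount;
    struct Order *next;
};

/* ...but a relational table is a flat set of rows, where the relationship
 * survives only as a foreign-key column, e.g.:
 *
 *   CUSTOMER(id, name)
 *   ORDERS(id, customer_id, amount)
 */
struct OrderRow {
    int id;
    int customer_id;        /* foreign key replacing the pointer */
    double amount;
};

int main(void)
{
    struct Order o = { 1, 99.90, NULL };
    struct Customer c = { 42, "Ada", &o };

    /* "saving" the object: translate the pointer into a foreign key */
    struct OrderRow row = { o.id, c.id, o.amount };

    printf("ORDERS row: (%d, %d, %.2f)\n", row.id, row.customer_id, row.amount);
    return 0;
}

Saving the object graph means translating pointers into key values; loading it means rebuilding the pointers. That two-way translation is where the mismatch lives.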

Thursday, 2 March 2017

Databases

Bringing our focus to today's technology, one thing immediately springs to mind: we have to deal with an inordinate amount of data like nowhere else in history. Nearly everything we do produces a measure of easily verifiable data, whether at work through productivity metrics or as consumers of products and services that can be tallied, allowing this very data to be tabulated and used to notice patterns and trends over a period of time. Data is the smallest unit of meaning; information is compiled data that serves a certain purpose; and knowledge is information that can be used to gain an end within a context.

In short, data is the representation of "facts" or "observations", whereas information refers to the meaning thereof (according to some interpretation). Knowledge, on the other hand, refers to the ability to use information to achieve intended ends. The transformation of data into information, and further into knowledge, depends largely on advanced systems capable of doing this processing while allowing human analysts to check the data and make sense of it for future decision-making. These systems are what we call databases.

Database systems are nowadays an essential part of life in modern society, making all of us regular users of at least one major database throughout the course of our existence, be it through library systems, bank transactions, grocery store purchases or hotel/airline reservations.

Traditional database applications are built to rely heavily on rigidly structured textual and numeric data. As database technology continues to make forays into new territory to reach a larger number of users, it becomes clear that new kinds of database systems are needed to glean information from data that structured applications can't handle. Hence nowadays we have multimedia databases and geographic databases (involving maps and satellite images).

Keeping a large amount of data and conducting regular, agile queries on it calls for a system specialised for just this: the proper handling of massive amounts of information. Hence we need a DBMS (database management system), software that manages a collection of related data and provides the following capabilities:

definition: specifying data types (and other constraints to which the data must conform) and data organization
construction: the process of storing the data on some medium (e.g., magnetic disk) that is controlled by the DBMS
manipulation: querying, updating and report generation
sharing: allowing multiple users and programs to access the database "simultaneously"
system protection: preventing database from becoming corrupted when hardware or software failures occur
security protection: guarding the database against malicious or unauthorised access.

Mini-World

The first step in designing a database is to understand the business context and all the relevant interactions and transactions that take place within the business environment. This set of interactions and dynamics pertinent to the business reality can be thought of as the mini-world, a scaled-down version of a business setting that serves as the main framework for the design of a database.

A mini-world is the portion of the real world that a database models: the setting of a business activity represented in the database domain. Examples of mini-worlds include a medical office, a retail business, an accounting firm etc. When we need to retrieve information from a mini-world, we handle information in a database modelled after it. When you go to a medical appointment, the receptionist will look up your name or ID number in this model, a database with information on all patients. If the model stays faithful to its mini-world, it should confirm the appointment. By the same token, we can also query the data to find information that is relevant to the business, such as the most frequent transactions, the highest-demand products, the most punctual payers etc. A mini-world can comprise a whole business or parts thereof (e.g. a single department or a branch of a large corporation).