
Tuesday, 27 December 2016

a law against entitlement

A while ago I received a request from a Facebook friend to like her new page. The page was one long screed of agitprop pushing for the passing of a new law:
This page was created with the purpose of pushing through a new law which will require all daycare professionals to have suitable facilities for kids with special needs.
Unsurprisingly, this person is the parent of a special needs child. While on holiday at a ski resort, the daycare could not accept their child because there were no facilities for special needs children on the premises. Indignant, they decided that they wanted a law to have their needs met the forceful way.
This is hare-brained reasoning in a nutshell. People don't need a law every time something doesn't go their way, and even less so when a rather unusual circumstance is at play. Most people would simply stay home to care for their child or arrange for someone to ensure the child was well cared for. Demanding legislation would never cross their minds: we don't need a law for every unusual circumstance, but rather common sense.

Monday, 21 November 2016

Functional and non-functional requirements

In software engineering and systems engineering, a functional requirement is something the system is supposed to do, such as calculations, technical details, data manipulation and processing, and other specific functionality expected from a system. Functional requirements are supported by non-functional requirements (also known as quality requirements), which impose constraints on the design or implementation. Non-functional requirements are often used as benchmarks for rating system performance and reliability. The plan for implementing functional requirements is detailed in the system design, while non-functional requirements are addressed in the system architecture plan.

A functional requirement establishes what the system should do, e.g. display a bank balance.
A non-functional requirement specifies criteria for how the system should behave rather than specific behaviours (a minimal sketch contrasting the two follows the list below). Non-functional requirements include:

- Reliability
- Look and feel
- Performance and efficiency
- Changeability
- Portability and interoperability
- Security
- Correctness
- Flexibility
- Scalability
- Constraints.
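
The contrast can be made concrete with a small C# sketch. This is an illustrative toy only; the class and method names and the 200 ms budget are invented for the example, not taken from any real system.

using System;
using System.Diagnostics;

class BalanceScreen
{
    // Functional requirement: the system must display a bank balance.
    static void DisplayBalance(decimal balance)
    {
        Console.WriteLine("Current balance: {0:C}", balance);
    }

    static void Main()
    {
        var timer = Stopwatch.StartNew();
        DisplayBalance(1234.56m);
        timer.Stop();

        // Non-functional requirement (performance): the same operation is
        // also constrained in *how* it must behave, here an assumed
        // 200 ms response-time budget.
        if (timer.ElapsedMilliseconds > 200)
            Console.WriteLine("Performance requirement violated");
    }
}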

Class diagram

The UML class diagram (UML is a modelling language for designing applications) represents the static view of an application, being useful not only for visualising, describing and documenting different aspects of a system but also as a basis for constructing executable code of a planned software application. The class diagram presents the attributes and operations of a class for modelling object-oriented systems. Besides classes, class diagrams also depict interfaces, associations, collaborations and constraints, which is why the class diagram is known as a structural diagram. A class diagram can also describe the responsibilities of a system and serve as the basis for deployment and component diagrams. When drawing a class diagram, its name should be meaningful to the system aspect it's describing, with each element and its relationships identified in advance.
Example of a class diagram, retrieved from <https://www.tutorialspoint.com/uml/uml_class_diagram.htm>
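
To make the mapping concrete, here is a minimal C# sketch of how one class box might translate into code; the Account/Customer names, attributes and operations are invented for the example and are not taken from the diagram linked above.

using System;

public class Customer
{
    public string Name { get; set; }
}

public class Account
{
    // Attributes (the upper compartment of the class box)
    public int Id { get; private set; }
    public decimal Balance { get; private set; }

    // Association: an Account is linked to a Customer
    public Customer Owner { get; set; }

    public Account(int id) { Id = id; }

    // Operations (the lower compartment of the class box)
    public void Deposit(decimal amount) { Balance += amount; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

class Demo
{
    static void Main()
    {
        var account = new Account(1) { Owner = new Customer { Name = "Ann" } };
        account.Deposit(100m);
        Console.WriteLine("{0} owns account {1} with balance {2}",
            account.Owner.Name, account.Id, account.Balance);
    }
}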

EKD - enterprise knowledge development

EKD - enterprise knowledge development - is a recent modelling methodology for analysing, understanding and documenting the components of an organisation. This methodology aims at tightening the ties between IT and business processes and creating a common knowledge repository, ultimately serving as a tool for effective knowledge management. Applying EKD enables a wider understanding of all the areas in a company, including the social, organisational, technical and economic aspects that come into play when establishing a plan for requirements engineering.

EKD seeks to address some issues such as:

- business plan strategies;
- analysis and definition of business rules;
- business process re-engineering;
- common understanding of how a business works to support problem solving.

EKD also uses sub-models to explore business components in depth, offering a breakdown of the company's current standing and how it can lead to meaningful changes that will create more value for the business:

1 - Objective model: describes what the company needs to attain and avoid. This model seeks to establish priorities and how the objectives relate to the problems, threats and opportunities that are part of the company's reality.

2 - Business rule model: states the business rules, which depend on what was established in the objectives model.

3 - Business process model: sets out the organisational processes and the interactions among them.

4 - Concept model: defines the entities and how they relate to the business flows.

5 - Actors and resources model: defines the actors and resources in the business processes, depicting the relationships therein. These can be people, organisational units and functions.

6 - Requirements and technical component model: describes the information system needed to support the business activities.

Sunday, 20 November 2016

Other business process techniques

Besides BPMN, there are other standard process modelling techniques. Many of them can also be used for software process modelling. Regardless of what is being modelled, the basic purpose remains the same: present a common representation of how information flows in a process to a wide range of stakeholders.

1 - Business Process Modeling Notation (BPMN)

BPMN is a graphical representation of a business process through standard objects. It works well for presenting business process information to stakeholders with little technical knowledge of process modelling, but it can also be used to show a greater level of detail, which is helpful to design engineers.

BPMN uses the following building blocks:

- Flow objects: events (circles), activities (rectangles with rounded corners), and gateways (diamonds);
- Connecting objects: arrows indicating sequence flow (solid arrows), message flow (dashed arrows), and associations;
- Swim lanes: pools (graphic containers) and lanes (sub-partitions of a pool);
- Artifacts: data objects, groups, and annotations.

2 - Flowchart - Its few standard symbols make it a suitable choice for a wide audience, since it requires little knowledge or understanding of how modelling works. While it relies heavily on sequential flows of actions, it's not optimised to depict the breakdown of individual activities. Its symbols are very similar to those used in BPMN, except that it uses rectangles with rounded edges for the start/end events, rather than an empty circle, while a parallelogram stands for data input and output.

3 - Integrated Definition for Function Modeling (IDEF) - IDEF is a family of methods that supports a paradigm capable of addressing the modelling needs of an enterprise and its business areas (IDEF, 2003). The IDEF family comprises different applications. For business process modelling, the most useful versions are IDEF0 and IDEF3. IDEF0 is a structured analysis and design technique, a method for modelling actions, activities and decisions for organisations and systems. Effective IDEF0 models help to organise the analysis of a system and to promote good communication between the analyst and the customer. IDEF0 is useful in establishing the scope of an analysis, especially a functional analysis. It's common to refer to IDEF0 as the box-and-arrow diagram, in which the box shows the function and the arrows going in and out of it indicate how operations are performed and controlled. Activities can be described by their inputs, outputs, controls and mechanisms (ICOMs).

4 - IDEF3, or Integrated DEFinition for Process Description Capture Method, is a business process modelling technique which comprises a scenario-driven process flow description used to learn how a specific system works. In other words, if you intend to find out the particulars of a certain system, you will do well to use IDEF3. IDEF3 captures the relationships between the actions of a given scenario, and its object state transitions capture the description of all possible states and conditions. The main purpose of IDEF3 is to let a domain expert express knowledge about how a system or process works.
As may be noticeable by now, description is a big part of IDEF3, being a keyword in this process modelling technique with a specific meaning: records of empirical observation (based on experience or common observations). Unlike a description, a model is a proposed entity or state of affairs supposed to represent objects and relations from a real-world system.
There are two IDEF3 description modes: process flow (captures knowledge of "how things work" in an organisation) and object state transition network (summarises the allowable transitions of an object throughout a particular process).

5 - ARIS - The ARIS toolset is a software tool for the depiction, upkeep and optimisation of business processes based on the ARIS framework. The ARIS toolset is split into four views: control, data, organisational and functional.

6 - Organogram - a diagram that shows the structure of an organisation and the relationships of its parts and positions/jobs. As a diagram and process modelling technique, it is most effective for depicting the hierarchy of organisational units. It can also be used to show the relationship of one department to another. Keywords frequently reserved for this modelling language include: organisational units, lines (to show hierarchical structure), role, internal/external person and group.

7 - EPC (event-driven process chain) - An event-driven process chain (EPC) is a modelling language for describing business processes and workflows. EPCs can be used for setting up an enterprise resource planning (ERP) implementation, and for business process improvement.

Event - Events are passive elements in event-driven process chains, describing under which circumstances a function or a process works or which state they result in. In the EPC graph an event is represented as a hexagon. In general, an EPC diagram must start with an event and end with an event.

Function - Functions are active elements in an EPC, representing tasks or activities within the company. Functions describe transformations from an initial state to a resulting state. In the event-driven process chain graph a function is represented as a rounded rectangle.

Process owner - The process owner is responsible for a function and is usually part of an organisation unit. It is represented as a square with a vertical line.

Organisation unit - Organisation units determine which part of the organisation is responsible for a specific function. Examples are "sales department", "procurement department", etc. It is represented as an ellipse with a vertical line.

8 - FAD (functional allocation diagram) - the functional allocation diagram is used to depict the enterprise business services and operations for a particular integration. A typical FAD depicts how input turns into output, along with the execution and the resources needed to make this happen.

BPMN - a brief description

Business Process Model and Notation (BPMN) is a graphical language for representing business processes in a business process model, maintained by the OMG (Object Management Group). BPMN proposes symbols and conventions that enable the user to model business processes and workflows and document information about current or proposed businesses.

Business Process Model and Notation (BPMN) is a standard for business process modelling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique similar to that of other diagram languages. Using BPMN allows both technical and business users to understand a graphical representation of the business processes in a standardised form and to find areas in the business model that could use improvement or remodelling. BPMN helps bridge the common communication gap between business process design and implementation.

Types of BPMN sub-model:

Business process modeling is used to communicate a wide variety of information to a wide variety of audiences. There are three basic types of sub-models within an end-to-end BPMN model:

Private (internal) business processes
Private business processes are internal to a specific organisation and are generally called workflow. The Sequence Flow of the Process is constrained to a single Pool and cannot cross its boundaries. Only Message Flows can cross the Pool boundary, to show the interactions with other private business processes.

Abstract (public) processes
This represents the interactions between a private business process and another process or participant. Only those activities that communicate outside the private business process are included in the abstract process. Private activities are not shown in the public process.

Collaboration (global) processes
The interactions now take place between two or more business entities. These interactions are defined as a sequence of activities that represent the message exchange patterns between the entities involved.

As a universal notation for process modelling, BPMN uses common elements to represent the interactions and information flows in its graphical representations. They are split into five categories:

- Flow objects: events, activities, gateways
- Data objects: data input, data output, singular and collective representation of data, data storage and message
- Connecting objects: sequence flow, message flow, association
- Swim lanes: pool, lane
- Artifacts: group, annotation (some authors include data objects within this category).

Connection objects in BPMN

Types of gateway, a category of flow objects

Thursday, 3 November 2016

Business Process Modelling and introduction to BPMN

Process modelling can be understood as a set of business activities depicted by graphical conventions showing the end-to-end chain of events for primary, support and management processes. Viewing the business from a process point of view lets all stakeholders understand the components of the business process, as well as the parts that need to communicate with other areas of the organisation. By modelling a process we can represent how the business works in a thorough and accurate manner. The process can be modelled according to the level of detail adequate for the intended stakeholder. A non-technical stakeholder will require a simpler representation of the business processes, while a higher level of detail is necessary for a technical stakeholder, whose role in the company usually demands a more comprehensive reading of the process diagram.

The main use of process modelling is to identify all the business areas involved in an activity, in addition to visualising the steps needed to accomplish said activity in the most efficient way. Process modelling therefore helps us understand the structure and dynamics of every organisational area, spot current problems and potential workarounds, and ensure that engineers and users share a common understanding of the organisation's inner workings.

A process model may comprise one or more diagrams, with each diagram containing its distinct set of objects and information about their behaviour and interaction with other objects. Modelling a process usually involves illustrations using icons and their relationships with other icons and entities. These icons and the symbols used for signalling their behaviour aren't created at random; rather, they are usually adapted from a standardised notation. Among the many advantages of using a standard for process modelling are a convention for universal understanding of the process components, consistency in the use and meaning of the model, and the possibility of porting the business process diagram between different tools. It's important to note that no single standard for process modelling applies everywhere; there are specific modelling languages for established areas. Some of these languages include BPMN (business process model and notation), flowcharts, UML (unified modelling language), ARIS, IDEF (integrated definition language) etc.

BPMN (business process model and notation)

The primary goal of BPMN is to provide a notation that is easily understood by all stakeholders, including business users, developers and the personnel who will oversee and manage the processes. This standard seeks to bridge the gap between business process analysis and process implementation. Thus it can be said that the main objective of BPMN is to propose a simplified means of communicating information among everyone involved in a business process. It bears mentioning that there are three basic types of modelling in a BPMN framework:

private/internal processes - also known as workflow or BPM processes. Modelling within a private view means that all information flow will be contained within a single pool.

public/abstract - a visual representation of the interactions between a private business process and another process or participant (remember that in BPMN a participant is also a pool). A public process differs from its private counterpart in that it throws into the mix external communication and control flow mechanisms to other participants. Other private activities are not shown here. Its main concern is to show to the external world the messages exchanged with another process or entity.

collaboration/global - a full display of all the possible interactions between two or more entities. The whole string of activities can be viewed here, including the messages exchanged back and forth among the participants. This model can hold one or more processes. In a collaboration process model, each participant is identified by a pool (as in a swimming pool) and the sequence flow can't cross over into a neighbouring pool. The communication between different pools is accomplished by message flows.

Monday, 24 October 2016

how to return the values of an array in ascending order in C#

A program that reads five values into an array and returns them in ascending order. It's a classic beginner exercise, usually solved by hand rather than with a ready-made routine like quicksort. The simplest form of the algorithm, a straightforward exchange sort, is shown below.

using System;

namespace ConsoleApplication3
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] values = new int[5];
            int temp;

            Console.WriteLine("type in 5 values");
            for (int i = 0; i < values.Length; i++)
            {
                values[i] = int.Parse(Console.ReadLine());
            }

            // So far we've seen the easy part, where you just have to follow
            // the syntax of the programming language. The sorting is below.

            // The outer loop walks the array up to the position prior to last.
            // The inner loop starts one position ahead of the outer index and
            // compares each remaining element with values[i]. Whenever a
            // smaller value is found further along, the two elements are
            // swapped through the temp variable, so after each outer pass
            // values[i] holds the smallest of the remaining values.
            for (int i = 0; i < values.Length - 1; i++)
            {
                for (int j = i + 1; j < values.Length; j++)
                {
                    if (values[i] > values[j])
                    {
                        temp = values[i];
                        values[i] = values[j];
                        values[j] = temp;
                    }
                }
            }

            Console.Write("Sorted:");

            // With foreach you don't need to know the length of the array:
            // it simply yields every integer in the array in order.
            foreach (int sort in values)
                Console.Write("{0} ", sort);
            Console.ReadKey();
        }
    }
}
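
For comparison, if you are happy to use a ready-made routine after all, the .NET framework's built-in Array.Sort does the same job in one call:

using System;

class BuiltInSortDemo
{
    static void Main()
    {
        int[] values = { 5, 3, 9, 1, 7 };
        Array.Sort(values); // sorts in place, ascending
        Console.WriteLine(string.Join(" ", values)); // prints: 1 3 5 7 9
    }
}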

Wednesday, 28 September 2016

Why I stopped watching collective sports altogether

I could never wrap my head around the fact that sports fans willingly choose to live vicariously through the success of others, especially athletes who serve as a propaganda tool to push an obvious narrative. Much better to play sports yourself or do exercise such as running or weightlifting. Live vicariously through yourself and be the man that you want to admire.
I understand that we do not live on an isolated island and that sports fandom used to serve the purpose of bringing together communities and the like, but when we live in an atomised society that has degenerated so much to the point that self-destructive behaviour is praised and even encouraged while nuclear family values are frowned upon, is it really worth it to spend your disposable income or waste time to basically give sanction to organisations that are working to destroy your nation and your people?

Thursday, 1 September 2016

CISC - complex instruction set computing

Complex instruction set computing (CISC) is a processor design where single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store). CISC processors are also capable of multi-step operations or addressing modes within single instructions. The hallmark of CISC processors is that memory load and store operations are performed along with arithmetic operations. In contrast to CISC designs, RISC uses a uniform instruction length for almost all instructions and employs distinct load/store instructions.
The features below are characteristic of CISC:

- CISC chips have a large number of different and complex instructions (of variable length).
- CISC machines generally make use of complex addressing modes.
- Different machine programs can be executed on CISC machines (it's RISC that favours uniformity).
- CISC machines use a microprogrammed control unit.
- CISC processors have a limited number of registers (RISC design favours more registers).

Disk Controller

The disk controller is the electronic assembly that governs the mechanics of a hard drive, floppy disk or other kind of disk drive. Its role is to control the rotating spindle and the position of the heads for reading and writing, and to interpret the electrical signals received, converting them into data at a particular location on the surface of the drive. Early disk controllers were identified by their storage methods and data encoding, and were typically implemented on a separate controller card. Modified frequency modulation (MFM) controllers were the most common type in small computers, used for both floppy disk and hard disk drives. Run length limited (RLL) controllers used data compression to increase storage capacity by about 50%.
The most common interfaces provided nowadays by disk controllers are PATA (IDE) and Serial ATA for home use. High-end disks use SCSI, Fibre Channel or Serial Attached SCSI. Disk controllers can also control the timing of access to flash memory, which is not mechanical in nature (i.e. there is no physical disk).

Monday, 29 August 2016

Program counter

The program counter (PC), called the instruction pointer (IP) in Intel x86 microprocessors and on occasion the instruction address register (IAR), is a processor register that indicates where a computer is in its program sequence. Its role is similar to that of a GPS in that it points to where the computer is in its program sequence - or rather, not the whole computer's position, only that of the instruction being executed in the processor. A register is a kind of memory that sits closest to the processor and is accessed by it directly. It's not the largest in capacity, but it is the fastest of them all, even faster than cache memory, RAM and secondary memory (memory used for storing personal files, like an HDD, flash memory etc.).

In most architectures, the program counter is incremented after fetching an instruction; that's why it's called a counter - it essentially performs an accumulating function. It's also worth remembering that a typical processor works through instructions in the same fetch, decode, execute cycle, and the very first step prompts the program counter to be incremented. After being incremented it holds the memory address of the next instruction to be executed. The act of holding a memory address is known as pointing to the next instruction. Instructions are, under normal circumstances, fetched in a predictable order. This pattern can be altered if a control transfer instruction inserts a new value into the PC. It's as if you were a teacher calling the roll and a new value in the program counter caused you to skip over some students' names. In the processor, what causes these skips are branches (which see to it that the next instruction is fetched from somewhere else in memory), subroutine calls (which save the contents of the PC elsewhere) and returns (which retrieve the contents saved by the subroutine call and put them back in the PC, resuming normal operation).
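
To make the PC's behaviour concrete, here is a toy simulation in C# (an invented mini instruction set, not any real processor): a loop fetches the instruction the PC points at, increments the PC, and lets call/return instructions save and restore it.

using System;
using System.Collections.Generic;

class TinyMachine
{
    enum Op { Print, Call, Return, Halt }

    struct Instr
    {
        public Op Op; public int Arg;
        public Instr(Op op, int arg) { Op = op; Arg = arg; }
    }

    static void Main()
    {
        var program = new[]
        {
            new Instr(Op.Print, 1),   // address 0
            new Instr(Op.Call, 4),    // address 1: branch to the subroutine at 4
            new Instr(Op.Print, 2),   // address 2: runs after the subroutine returns
            new Instr(Op.Halt, 0),    // address 3
            new Instr(Op.Print, 99),  // address 4: subroutine body
            new Instr(Op.Return, 0),  // address 5
        };

        int pc = 0;                          // the program counter
        var returnStack = new Stack<int>();  // where a call saves the PC

        while (true)
        {
            Instr instr = program[pc]; // fetch the instruction the PC points at
            pc++;                      // increment: the PC now points to the next instruction

            if (instr.Op == Op.Print)
                Console.WriteLine("print {0}", instr.Arg);
            else if (instr.Op == Op.Call)
            {
                returnStack.Push(pc);  // save the PC elsewhere...
                pc = instr.Arg;        // ...then overwrite it (the "skip")
            }
            else if (instr.Op == Op.Return)
                pc = returnStack.Pop(); // restore the saved PC, resuming normal operation
            else
                break; // Halt
        }
    }
}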

Saturday, 20 August 2016

Piaget peeks into the role of subjectivity and knowledge building

For Piaget, there's no way to dissociate cognition from affection, given that intellectual development stems from these two elements: affection and cognition are inseparable and interwoven in all symbolic and sensory-motor actions. Affection addresses feelz - a communication channel through which hankerings, needs, qualms and fears are relayed. There cannot be behaviour shaped from affection alone, with no accompanying cognitive element, and it's just as unlikely to find behaviour crafted out of cognition only. Even though both cognition and affection are entwined in any given behaviour, they differ in their nature... It's clear that the affective factors are actually involved in the most abstract forms of intelligence. In order for a scholar to solve an algebra problem, there should be intrinsic interest, extrinsic interest or a starting want. While at work, states of pleasure, disappointment and anxiety, as much as a sense of fatigue, struggle and dullness, warp into the scene. Upon completion of his chore, feelings of success or failure might occur; and at length, the student may experience aesthetic feelings springing from the coherency of his solution. (WADSWORTH, 1997, p. 37).

Learning calls for feelz: desirable and unwanted ones alike. It demands a tenderness that goes beyond the outside realm of physical touch; learning in its truest sense overreaches the soul in an entreaty to allow dreams to come to fruition through the power of knowledge. We learn through our senses, which enables us to accept the overt array of opportunities the world has to offer us. Its calling draws us forth and although we may halt at times, the process carries on.

Learning is akin to an intermittent saunter: we have to start over daily. 

Henceforth, affection and emotions cannot be cast aside or refused in the educational process. For Piaget, intellectual development and affectivity have two relevant aspects to be examined vis-à-vis their intertwined relationship: the drive for intellectual activity, and selection:

- drive for intellectual activity: in order for an intellectual endeavour to set in, it requires a trigger, a desiring factor; that is, something must switch on the motivation for knowledge.

- selection: an intellectual activity converges onto particular situations or objects; interest relates to a desire for something. For Piaget, what spurs selection is affectivity and interest, not the cognitive activities in themselves.

It can be inferred, thus, that affectivity for Piaget means to guide: from affection to knowledge, from hardships to potentiality, from insecurity to confidence, from certainties to problematisations. Hence, one can understand affectivity as one of the underpinnings of desires, interests and concrete deeds. The role allotted to affectivity in cognitive functioning is to either stall or speed up this very functioning, paving the way for new frameworks to flourish or promoting inhibitions and blocks.

Thursday, 18 August 2016

Popper's three worlds

The three worlds is a way of understanding the existence of our reality by splitting it into three worlds. Namely, they comprise the outer world (the realm of physical matter and every possible biome on this earth), subjectivity (thoughts, feelz, experiences) and the world of objective thought content (mathematical concepts, logical reasoning etc.).

A fierce proponent was Karl Raimund Popper, to whose name the three-worlds concept has been closely tied. As regards the third world, Popper took an approach similar to Charles S. Peirce's, claiming that the objective contents of a man's thoughts are products fashioned by his own creation, which is ruled by his own existence. Subjectivity is the entity that mediates between one's external and spiritual worlds.


World 1 showcases the physical world and all the elements therein
 
The interaction of World 1 and World 2
Interaction between World 1 and World 2 recalls the theory of Cartesian dualism, which avows that the universe is made of two main entities: res cogitans and res extensa. Popperian enthusiasts uphold the idea that physical and mental states exist and are perfectly capable of fully functional interaction with one another.

The interaction of World 2 and World 3

The interaction of World 2 and World 3 is based on the theory that World 3 is partially autonomous. For example, the development of scientific theories in World 3 leads to unintended consequences, in that problems and contradictions are discovered by World 2. Another example is that the process of learning affects World 2 as World 3 expands its reach.

The interaction of World 1 and World 3
World 3 contains all the abstractions necessary to make sense of mathematical and physical laws. This means that the same objects which inhabit World 1 are wrapped in meaningful codes in World 3 and accessed through thought processes that allow said objects to be manipulated within the cognitive realm of human intellect.

Retrieved from http://vannevar.blogspot.com.br/2009/05/experience-karl-poppers-three-worlds.html

Wednesday, 17 August 2016

Software Interrupt - a short definition

A software interrupt is an explicit call of a subfunction (mostly an operating system function). It has nothing to do with asynchronous (hardware) interrupts, although both commonly use the same interrupt table. Common mnemonics include:
INT xxh (Intel 8086 interrupt)
SC xxh (Zilog Z8000 system call)
TRAP xh (Motorola 68000 trap)
CALL 0005h (CP/M-80; the Intel 8080/Zilog Z80 had no special command for this)
Function calls are invoked from programs by means of these special instructions, which is why the number of the required subfunction must be known. These numbers are used as an index into an interrupt table, which contains the starting address of the corresponding subroutine.
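
The dispatch idea can be sketched in a few lines of C# (a toy model, not how a real CPU or operating system implements it): the interrupt number indexes a table of handlers standing in for starting addresses.

using System;

class InterruptTableDemo
{
    // Toy "interrupt table": the interrupt number is an index into an
    // array of handler routines (standing in for starting addresses).
    static readonly Action[] InterruptTable =
    {
        () => Console.WriteLine("handler 0: divide error"),
        () => Console.WriteLine("handler 1: single step"),
        () => Console.WriteLine("handler 2: print service"),
    };

    // Stand-in for an INT xxh instruction: look up the handler by
    // number and transfer control to it.
    static void Int(int number) => InterruptTable[number]();

    static void Main()
    {
        Int(2); // the program "knows the number" of the subfunction it needs
    }
}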

Saturday, 6 August 2016

8 golden rules of Interface Design

The 8 golden rules of interface design were set out in a 1988 book by Ben Shneiderman, serving mainly as guidelines for the effective design of interactive interfaces according to the major principles of usability, communicability and applicability.

Strive for consistency - Consistent sequences of actions are to be used in similar situations. The same should be done with word choice, which is best used in prompts, screens and menus. Successfully navigating a screen once should ensure that the user can navigate the other screens in similar fashion.

Enable frequent users to use shortcuts - As the frequency of use increases, so does the user's desire to cut back on time used to do the same actions and to increase the pace of interaction. Shortcuts, function keys and macro facilities help speed up interactions for users who are already acquainted with the system.

Offer informative feedback - feedback should be provided for every action. For frequent and common actions, the response can be kept to a minimum, while major actions call for a higher level of detail.

Design dialogue to yield closure - every group of actions should have a beginning, a middle and an end. Upon finishing a set of actions there should be a system response informing the user of its successful completion and that he is good to proceed to the next group of actions.

Offer simple error handling - system designers should create a system that won't allow the user to make a serious error that would compromise the whole system. The system itself should detect inconsistencies and offer simple dialogues for understanding and handling the error.

Permit easy reversal of actions - just make sure that errors can be undone, thus allowing for the exploration of unfamiliar paths.

Support internal locus of control - this means giving users the perception that they are in charge of the system and it responds to their actions accordingly.

Reduce short-term memory load - The limitation of human information processing in short-term memory requires that displays be kept simple, multiple page displays be consolidated, window-motion frequency be reduced, and sufficient training time be allotted for codes, mnemonics, and sequences of actions.

Friday, 5 August 2016

Affordance

Affordance is, simply put, the way an object is supposed to be used. An object's affordance depends on a host of properties, not the least of which is the way the designer chose to assign its attributes so that they conform to what most objects of its kind have. Should it differ too much from what other objects of its kind present to the user, it might alienate him and not find as much purchase on his mind as the stakeholders intended. It's always convenient for the object's affordance to fit the common aspects of its nature, implying that designers should be careful not to overplay their creativity and come up with something entirely unconventional, lest users be uncomfortable handling it.

The concept of affordance isn't limited to the designer's capacity for developing objects that follow an easily recognisable pattern. Affordance also includes visual cues: the way a user views an object should be enough for him to know how to use it effectively without prior instruction or reading, and without having to stop now and then to reorient himself wondering what to do next. A cursory glance over the object's physical frame should be enough to know right away how to handle it. It should be clear by now that affordance relies on the user's world knowledge to navigate mundane surroundings. Good affordance requires a balance between the individual variety within the user base and the multifaceted assortment of elements that make up the object's features. For affordance to occur completely, cultural, physical, logical and psychological variables have to be factored in.

William Gaver divided affordances into three categories: perceptible, hidden, and false.

    A false affordance is an apparent affordance that does not have any real function, meaning that the actor perceives nonexistent possibilities for action. A good example of a false affordance is a placebo button.
    A hidden affordance indicates that there are possibilities for action, but these are not perceived by the actor. For example, it is not apparent from looking at a shoe that it could be used to open a wine bottle.
    For an affordance to be perceptible, there is information available such that the actor perceives and can then act upon the existing affordance.


 

Monday, 1 August 2016

Nielsen's 10 heuristics

Jakob Nielsen's heuristics are guidelines for ensuring proper usability in interface design. The heuristics were published in 1994 in a joint effort with an associate of his and are still widely used to this day with no changes whatsoever:

1 - Visibility of system status: the system should always keep users informed about its inner proceedings and provide immediate feedback for every action the user performs.

2 - Match between system and the real world: language used in the system should be intelligible to the user with little to no cognitive effort involved. The elements in the system should follow a logical order and its interface has to follow conventions common to the user, like using red for danger signs, yellow for warnings, and green or blue for dialogues informing the user that something has occurred as expected.

3 - User control and freedom: upon making a mistake, the user should have an easily identified way out. Support for undo and redo operations is the norm for satisfying this heuristic.

4 - Consistency and standards: users shouldn't have to wonder what to do next due to a lack of consistency between one screen and another. The human capacity for pattern recognition should be fully exploited so as not to baffle users with screens and dialogues that differ from previous ones. After executing a certain task and advancing to the next step, the new screen shouldn't dissent from the previous one in layout.

5 - Error prevention: make sure your system is as bug-free as possible. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

6 - Recognition rather than recall: this dovetails nicely with the consistency and standards heuristic. Users should instinctively grasp the meaning of the interface without resorting to their memorisation skills. Everything needed should be present within the user's field of vision. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

7 - Flexibility and efficiency of use: the system should cater to both experienced and novice users. Efficiency of use means that users can tailor the system to speed up frequently repeated tasks.

8 - Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

9 - Help users recognise, diagnose, and recover from errors: error messages should be written in plain language without befuddling string combinations (see Microsoft Windows error codes for a counter-example). Besides, they should point to the actual problem and accurately describe a solution.

10 - Help and documentation: Should be easy to get to and relevant to the current task. Again, concision and coherence need to be observed in order to design effective in-system help.

Saturday, 30 July 2016

ISO 9241-11 in a nutshell

ISO 9241-11 is an international standard published in 1998 by the International Organization for Standardization (ISO) as part of the ISO 9241 series on the ergonomics of office work with computers. It aims to establish guidelines for users' health and safety and prescribes the benefits of gauging usability through performance and user satisfaction according to the context of use. This context comprises users, tasks, equipment and the physical/social environment.

Cognitive walkthrough - a more precise definition

The cognitive walkthrough is a usability evaluation method in which one or more evaluators work through a series of tasks and ask a set of questions from the perspective of the user. Its focus is on understanding the system's learnability for new or infrequent users. The objective of the cognitive walkthrough is to identify usability problems in order to evaluate how easy the system is to learn by exploration. This method seeks to probe the following:

- relationship between how designers and users conceptualise a task;

- proper choice of vocabulary for on-screen terms;

- adequate feedback for an action;

In order to carry out this evaluation, it's required to set up a preparation stage to define:

- hypotheses about the users and the knowledge they are supposed to have about a task and its underlying interface;

- a task scenario, drawn from a collection of important and recurrent tasks;

- correct sequence of actions to complete a given task;

- design blueprint illustrating each step and the ensuing interface changes.

The procedure to run this evaluation involves the following steps:

- the designer presents the design proposal;

- evaluators work through situations involving the user and the interface, based on the prior task scenario;

- evaluators simulate the execution of the task, asking questions as they go along;

- evaluators make note of key points that users need to know before getting started on the task and learn upon doing it.

The point of using a cognitive walkthrough as method for evaluating human-computer interaction is to make sure that users are capable of navigating a system interface by trial and error with no required training.

Thursday, 28 July 2016

Interface (computing)

The interface is the part of a system that supports communication with the user. The concept originally came from the natural sciences, meaning the threshold between states. It was used to describe the contents of a system as a black box, of which only the surface is known, therefore making communication possible only via said surface. Two neighbouring black boxes can only communicate with each other if their surfaces "match up". Nowadays an interface is a shared boundary across which software, computer hardware, peripheral devices, humans and combinations of these exchange information.

Moreover, for both interacting boxes it doesn't matter how their inner parts read the message or how the response is crafted from the received input. The understanding is that a border is a part of the self, and the black boxes need only know the facing sides in order to ensure communication. That matches the original Latin inter ("between") and facies ("looks"), later anglicised to "face".

If one regards any system worth analysing as a coherent whole, he will break it down into its individual parts. The points of contact at which communication is established between those parts represent the interfaces. To put them to use, these individual parts have to be put together again to become a whole greater than the sum of its parts.

Tuesday, 26 July 2016

cognitive walkthrough


Cognitive walkthrough is a usability inspection method belonging to the family of analytical evaluation processes, as opposed to empirical evaluation processes like usability testing. In a cognitive walkthrough a hypothetical user follows a course of action prescribed by a usability expert, who takes note of the cognitive effort required of the user to find his way around the interface. The goal of the cognitive walkthrough is to determine how easily a user can learn to use a service. This helps software developers spot design flaws in the interface that affect user activity. Common issues brought to the fore by cognitive walkthrough include bad menu design and unsuitable mechanisms for undoing an undesirable action.

A typical cognitive walkthrough is split into 4 steps:

1 - Define input - to lay the foundations for an effective cognitive walkthrough, it's suggested to first define the features of the target user. This involves establishing the user group most likely to be the end consumers of the product and what knowledge and experience they are most likely to have. Next, some example tasks should be set up. These may range from one to numerous tasks representative of the work environment of the end user. It's important that these tasks be chosen to match as closely as possible the real tasks performed on the system. Finally, the sequence of actions to perform the chosen tasks is determined, in an attempt to predict which path the user is most likely to choose.

2 - Check the sequence of actions - the observer takes note of the solution path employed by the user to accomplish a given task, heeding the user's input and how he concluded that this was the most efficient way to carry out the intended end. Some questions commonly asked at this stage are:
Will the user manage to produce the right effect?
Will the user be able to recognise the correct action for what he's trying to do?
Will the user establish a connection between the correct action and the desired effect?

3 - Record critical information - at this point the observer will have two sorts of information obtained during the product analysis: (a) information about the experience and knowledge the user needs to successfully execute an action, considering the various sequences necessary to do so; (b) information about actions that actually led to errors and thus to problems for the user.

4 - Interface revision - the goal of the cognitive walkthrough is to identify flaws in the interface, which in turn leads to improvements in the interface design, which in turn should lead to better overall user satisfaction and more efficient system use.

As one can notice, a cognitive walkthrough starts with a task analysis suggesting the sequence of steps or actions required by a user to accomplish a task. The designers and developers of the software then walk through the steps as a group, asking themselves a set of questions at each step, gathering data during the walkthrough. Afterwards a report of potential issues is compiled. Finally the software is redesigned to address the issues identified.

The 8 Gestalt Laws


Gestalt psychology is a branch of psychology that tries to explain how we apply pattern recognition and perceive harmony of shapes in a frenetic world. The main principle of Gestalt is that the mind is capable of perceiving things as a global whole in order to make meaningful readings of common everyday objects. The main premise is that when the human mind exerts the principles of Gestalt, the whole has a reality of its own regardless of its component parts. In a nutshell, the perception of an object or system only has meaning if it's complete. Were one to take it apart into its constituent parts, each bit would hold no relevance for the understanding of how it plays out its functions, as opposed to when it was part of the functioning whole.

Another way of putting the Gestalt principles is that the whole is greater than the sum of its parts: just assembling a pile of components won't mean as much as having the same items working in unison to achieve a common goal. According to gestaltists, the organisation of cognitive processes accounts for our faculty of recognising patterns and predicting behaviours. There are 8 laws describing the underlying cognitive processes that allow for our coherent organisation of information:

1 - Law of Proximity— The law of proximity states that when an individual perceives an assortment of objects, they perceive those that are close to each other as forming a group. For example, in the picture below, there are 72 circles, but we perceive the collection of circles in groups, associating the ones close to each other as a single group.


The Law of Proximity implies that we consider the circles belonging to groups according to how close they are to each other.




2 - Law of Similarity— This law states that assorted items are perceptually grouped together if they bear a striking similarity to each other in shape, colour, shading or other visible qualities. For example, the figure illustrating the law of similarity portrays 36 circles, all equally spaced from one another and forming a square. In this depiction, 18 of the circles are shaded dark and 18 of the circles are shaded light. We perceive the dark circles as grouped together, and the light circles as grouped together, forming six horizontal lines within the square of circles. This perception of lines is due to the law of similarity.

Law of Similarity or how like belongs to like.
 



3 - Law of Closure—The law of closure states that people perceive objects such as shapes, letters, pictures etc. as a whole unit even when they are not complete and parts are missing. For instance, when reading text from another interlocutor and you come across a misspelled or incomplete word, you use your perception to fill in the gaps, as in incomplete words like hippopotamu, distinctiv, semiti, hobbl and indstry. The presumption of what the word is supposed to be happens at the subconscious level. The law of closure is widely applied when taking notes and writing field reports which need to be done quickly without much worry about form, which is when we use abbreviations and short forms for words. A good strategy based on the law of closure in this context is using only consonants when writing down words, except for the first letter, e.g. adrs for address, cty for city, redsgn for redesign, etc.

4 - Law of Symmetry— the mind perceives objects as being symmetrical and forming around a centre point. In this process the mind seeks to form a coherent shape in order to better process information about said object.


5 - Law of Common Fate— this law states that objects are perceived as lines that move along the smoothest path. Experiments using the visual sensory modality found that we perceive elements of objects to have trends of motion, which indicate the path that the object is on.

6 - Law of Continuity— the elements of an object are integrated into perceptual wholes if they are aligned within an object. In cases where there is an overlap of the lines and shapes between objects, the two objects are perceived as two single uninterrupted entities.


7 - Law of Good Gestalt— elements are perceived as belonging to the same group if they have a pattern that is regular, simple and orderly. Regular and simple shapes are favoured over irregular and complex-looking shapes for good design and easily memorable signs.

8 - Law of Past Experience—The law of past experience implies that under some circumstances visual stimuli are categorized according to past experience. If two objects tend to be observed within close proximity, or small temporal intervals, the objects are more likely to be perceived together. For example, the English language contains 26 letters that are grouped to form words using a set of rules. If an individual reads an English word they have never seen, they use the law of past experience to interpret the letters "L" and "I" as two letters beside each other, rather than using the law of closure to combine the letters and interpret the object as an uppercase U.

Sunday, 24 July 2016

Usability - all you will ever need to know

Usability, communicability, applicability. These are, simply put, the main pillars that support the whole concept of human-computer interaction. Usability is concerned with the user's satisfaction with the system and with how efficient said system is. Communicability is the way the system interface manages to get information across to the user. In other words, a system with good communicability is one whose icons, menus, dialogue boxes and other communicative means are arranged in such a way that it doesn't take much time for the user to understand the intended purpose the system wants to convey. Any message on screen should be clear, concise and right to the point, with no useless jibba jabba involved. Applicability deals with the usefulness of the system. A system with a high level of applicability is multipurpose and capable of being used in contexts other than the one it was originally designed for. Ideally, an interface is said to have high applicability value if it can potentially increase the user's skill at performing a certain task he's been assigned.

Usability may seem by far the most important of the three core concepts. In truth, usability is the most visible requirement for good human-computer interaction from the user's point of view; moreover, a system with the expected degree of usability more often than not also addresses the other two requirements for optimal HCI to an equally desirable degree.

Usability can also be understood as the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments, where effectiveness is the accuracy and completeness with which a desired goal is achieved in a given context, while efficiency is the ratio of resources expended to goals achieved. Satisfaction addresses the comfort and acceptability of the work system to its users and the people affected by its use.

Friday, 22 July 2016

Cognitive-Experiential Self-Theory (CEST)

Cognitive-Experiential Self-Theory (CEST) is a dual-process model of perception devised by Seymour Epstein, based around the idea that people use two separate systems for information processing: analytical-rational and intuitive-experiential. The former is deliberate, slow and logical, while the latter is fast, automatic and emotionally driven. They operate independently of each other, but the result of their interaction produces behaviour and conscious thought, giving rise to a person's observable personality. On an individual level, there might be a marked preference for one processing mode over the other. This difference can be measured using the Rational-Experiential Inventory (REI), which takes into account two criteria for cognitive activity: need for cognition (conscious effort to process information) and faith in intuition (an experiential measure).

Although it might seem superficially appealing to exercise the rational system at the expense of the experiential system, it doesn't work this way. Suppressing the experiential system and contriving for the rational system to be overrepresented on a daily basis will eventually cause knowledge and skills regarded as part of the analytical-rational domain to shift to the intuitive-experiential sphere, although it's entirely possible for the analytical-rational system to amend the performance of the experiential system. Given enough cognitive resources, the rational system can even hold sway over the amount of influence the experiential system exerts on our decision-making processes.

Monday, 13 June 2016

Virtualisation Sprawl.

Virtualisation sprawl is an undesirable process that happens when the number of virtual machines on a network becomes so large that it's impossible to manage them effectively. Virtual machines can be created with ease, but having one entails the same level of responsibility and compliance as a physical one. Among the measures taken by administrators to check virtualisation sprawl are standardising VM image files and shelving VMs that are not being used to their full potential.
Virtual machine lifecycle management (VMLM) tools can help administrators oversee the implementation, delivery, operation and maintenance of virtual machines over the course of their existence. Such tools can furnish administrators with a dashboard user interface (UI) to display the virtual machines running on a network along with the physical machines hosting them and the proprietary licences installed on them.  

Saturday, 28 May 2016

IT controls


IT controls are specific activities performed to ensure that business objectives are met. IT control objectives tie in to the core tenets of IT security, namely confidentiality, integrity and availability of data, serving as a valuable tool for the IT management of any business enterprise. IT controls are performed by persons or systems and commonly come in two categories: IT general controls (ITGC) and IT application controls. ITGC include controls over the information technology (IT) environment, computer operations, access to programmes and data, programme development and programme changes. IT application controls gauge the processing of transactions, checking the accuracy of input and output routines.


IT General Controls (ITGC)

ITGC help ensure that the data generated by IT systems are reliable and that the systems' operation and output can be trusted. The following types of control are common for ITGC:

Control environment - controls designed to shape the corporate culture or "tone at the top."
Change management procedures - controls designed to ensure the changes meet business requirements and are authorised.
Source code/document version control procedures - controls designed to protect the integrity of programme code
Software development life cycle standards - controls designed to ensure IT projects are effectively managed.
Logical access policies, standards and processes - controls designed to manage access based on business need.
Incident management policies and procedures - controls designed to address operational processing errors.
Problem management policies and procedures - controls designed to identify and address the root cause of incidents.
Technical support policies and procedures - policies to help users perform more efficiently and report problems.
Hardware/software maintenance - configuration, installation, testing, management standards, policies and procedures.
Disaster recovery/backup and recovery procedures - to enable continued processing despite adverse conditions.
Physical security - controls to ensure the physical security of information technology from individuals and from environmental risks.



IT application controls

These controls are fully automated to ensure that data are processed accurately from input through output, while also ensuring the privacy and security of transmitted data. IT application controls may include the following (a short sketch of the first two appears after the list):

Completeness checks - controls that ensure all records were processed from initiation to completion.
Validity checks - controls that ensure only valid data is input or processed.
Identification - controls that ensure all users are uniquely and irrefutably identified.
Authentication - controls that provide an authentication mechanism in the application system.
Authorisation - controls that ensure only approved business users have access to the application system.
Input controls - controls that ensure the integrity of data fed from upstream sources into the application system.
Forensic controls - controls that verify the logical accuracy of data based on input and output checksums.
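To make the first two controls concrete, here is a minimal Python sketch of a completeness check and a validity check; the field names and the business rules are invented for illustration.

# Sketch: completeness and validity checks on an incoming record.
# Field names and rules are hypothetical.
REQUIRED_FIELDS = {'account_id', 'amount', 'currency'}
VALID_CURRENCIES = {'EUR', 'USD', 'GBP'}

def is_complete(record: dict) -> bool:
    # Completeness check: every required field must be present and non-empty.
    return all(record.get(f) not in (None, '') for f in REQUIRED_FIELDS)

def is_valid(record: dict) -> bool:
    # Validity check: only well-formed values may be processed further.
    try:
        return record['currency'] in VALID_CURRENCIES and float(record['amount']) > 0
    except (KeyError, ValueError):
        return False

record = {'account_id': '42', 'amount': '99.90', 'currency': 'EUR'}
print(is_complete(record) and is_valid(record))  # True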

IT controls and the CIO/CISO
The organisation's Chief Information Officer (CIO) or Chief Information Security Officer (CISO) is typically responsible for the security, accuracy and the reliability of the systems that manage and report all the company's data.

Internal control frameworks
COBIT (Control Objectives for Information and Related Technology)
COBIT is a common framework for best practices in both IT general and application controls. Its basic premise is that IT processes should satisfy business requirements through specific IT control activities and the evaluation of those processes. The four major COBIT domains are: plan and organise; acquire and implement; deliver and support; and monitor and evaluate.
Another common framework is COSO (Committee of Sponsoring Organizations of the Treadway Commission), which uses five elements of internal control: control environment, risk assessment, control activities, information and communication, and monitoring.

Monday, 23 May 2016

Hot site

A hot site is an off-premises location that allows a business to continue computer and network operations in the event of a computer or equipment disaster. A hot site has all the equipment necessary for a business to resume regular activities, including phone jacks, backup data, computers and related peripherals. Hot sites can be part of a business continuity plan or disaster recovery plan, which lays out the plans and procedures to follow when normal business activities cannot go on at the usual location.
If an enterprise's data centre becomes inoperable, for instance, all data processing operations are moved to a hot site. A cold site is similar in concept but provides office space only; it's up to the customer to provide and install all the equipment needed to resume operations. It therefore takes longer to get an enterprise fully operational after a disaster when a cold site is used.

Saturday, 21 May 2016

Difference between contingency plan and contingency planning

A contingency plan is made for emergency response, backup operations and post-disaster recovery of information systems and IT facilities when an unexpected service interruption takes place. The objective of the plan is to minimise the impact on normal service capacity in the event of damage to information systems or to the facilities in which they're housed. Crisis management is part of the contingency plan in that it describes the measures to be taken to manage unexpected occurrences in the operational environment.

Contingency planning addresses how to keep a company's critical processes running if any disruption happens; it's how a company prepares its staff for emergency situations. A major element of that preparation is envisioning all of the potential emergencies that could occur: if a scenario would be dire, it is worth the time and resources to prepare for its realisation. Businesses, governments and other organisations that employ contingency planning consider a range of scenarios that could affect their operations, aiming to be comprehensive in the scope of emergencies they examine. Overlooking a possible category of emergency in the planning phase can leave an organisation poorly prepared when a crisis hits. A helpful analogy for visualising the importance of contingency planning is how you would react if your house suddenly caught fire. It might be tempting to think the obvious answer is to gather all the belongings that can be salvaged and make a run for it as fast as possible. But that is wishful thinking: it's a prediction based on what you would instinctively do should it happen.


Nevertheless, effective contingency planning can't work on instinctive reaction alone. If anything, it's counterproductive to rely on knee-jerk reflexes while throwing caution and reason to the wind. To be best prepared when a fire starts, you should think through all the steps that ensure as much safety as possible while minimising material loss. That would include procedures such as keeping the fire brigade's number and a phone within reach, placing fire extinguishers at strategic locations and becoming familiar with operating them, placing exit signs to coordinate a safe escape, making sure emergency stairways are always unobstructed, and so on. These procedures might seem glaringly obvious from a reasonable standpoint, but in the heat of the moment it's easy to panic and not do the most reasonable thing. Formally documenting a contingency plan helps prevent chaotic behaviour that would only exacerbate the trouble. The same applies to limiting the damage done to an organisation's operations and information systems. Contingency planning likewise goes through a series of stages: identification of critical processes, business impact analysis, plan development and documentation, training, testing, and maintenance and update.





Common steps for contingency planning. From <http://homeworkhelpexperts.blogspot.com.br/2011/07/steps-of-developing-contingency-plan.html>

Friday, 20 May 2016

Contingency Plan

A contingency plan is a plan devised for handling disasters, although any plan designed for an outcome other than the expected one can be called a contingency plan. Often referred to as plan B, it's applied to risks of great magnitude that would have wide-reaching consequences for the business. A contingency plan is often necessary to avoid the paralysis that occurs when someone faces a situation previously thought unlikely to occur. It describes not only how to prepare for a disaster but also how to act during the actual occurrence of one, and it usually sets out the tasks, responsibilities and competences assigned to the staff of an organisation. Devising an effective contingency plan includes a business impact analysis and assessment stage.

The seven steps outlined for developing an IT contingency plan are:

1. Develop the contingency planning policy statement. A formal policy provides the authority and guidance necessary to develop an effective contingency plan.

2. Conduct the business impact analysis (BIA). The BIA helps identify and prioritize information systems and components critical to supporting the organization’s mission/business functions.

3. Identify preventive controls. Measures taken to reduce the effects of system disruptions can increase system availability and reduce contingency life cycle costs.

4. Create contingency strategies. Thorough recovery strategies ensure that the system may be recovered quickly and effectively following a disruption.

5. Develop an information system contingency plan. The contingency plan should contain detailed guidance and procedures for restoring a damaged system unique to the system’s security impact level and recovery requirements.

6. Ensure plan testing, training and exercises. Testing validates recovery capabilities, training prepares recovery personnel for plan activation, and exercising the plan identifies planning gaps; combined, these activities improve plan effectiveness and overall organisational preparedness.

7. Ensure plan maintenance. The plan should be a living document that is updated regularly to remain current with system enhancements and organizational changes.


Reference:

ROUSE, Margaret. Contingency Plan. WhatIs.com. Retrieved from <http://whatis.techtarget.com/definition/contingency-plan>.

Tuesday, 17 May 2016

ICA-AtoM

ICA stands for International Council on Archives, while AtoM is short for "Access to Memory". ICA-AtoM is a fully web-based repository that supports both single- and multi-repository implementations. It is an open-source system built to streamline archival workflow, enabling repositories to publish their collections online with minimal cost and effort. It supports multiple collection types in a user-friendly way according to best practices for accessibility, making it flexible and customisable for small and large organisations alike.

As a project, ICA-AtoM is free, open-source software developed by Artefactual Systems in collaboration with the ICA Program Commission (PCOM) and a growing network of international partners.

Sunday, 15 May 2016

Booster Bag

A booster bag is a handmade bag used to shoplift, typically from retail stores, libraries, and any other location employing security detectors to deter theft. The booster bag can be an ordinary shopping bag, backpack, pocketed garment, or other inconspicuous container whose inside is lined with a special material, typically multiple layers of aluminium foil.

An item is placed inside the booster bag, which is in effect a Faraday cage. This provides electromagnetic shielding, with the result that electronic security tags inside the bag may not be detected by security panels in the detector antennas at the store exit.

Booster bags have been used by professional shoplifters for several years. Using them, a shoplifter can steal dozens of items with very little effort.

The name "booster bag" comes from "boost" in the slang sense of "shoplift."

Principal

A principal, in computer security, is any entity (a person, computer, service, process or thread, or a group of such things) that can be authenticated by a computer system or network.
Principals need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier that allows it to be referenced for identification or for the assignment of properties and permissions.
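As a purely illustrative Python sketch (the names and the credential store are made up), a principal can be modelled as an identifier that only receives permissions after authentication succeeds:

# Sketch: principals as authenticated entities that can be granted rights.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Principal:
    identifier: str                               # unique reference for the entity
    permissions: set = field(default_factory=set)

CREDENTIALS = {'svc-backup': 's3cret'}            # toy credential store

def authenticate(identifier: str, secret: str) -> Optional[Principal]:
    # Rights are only assigned to entities the system has authenticated.
    if CREDENTIALS.get(identifier) == secret:
        return Principal(identifier)
    return None

principal = authenticate('svc-backup', 's3cret')
if principal:
    principal.permissions.add('read:/var/backups')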

Bastion host


A bastion host is a server that either offers services to the open internet or works as a proxy for internet access, and therefore requires particular protection against malicious attack. To achieve this security, the server is placed in a demilitarised zone, shielded from both the outside network and the intranet by a firewall configured to restrict traffic between the two zones. As a critical strong point in network security, a bastion host is a computer built specifically to withstand attacks.

This practice blocks direct access between the internal network and an external network such as the world wide web by ensuring that only the necessary ports are open at any given time. Under this arrangement, a web server on the bastion host can only be reached if the firewall explicitly allows traffic on port 80, and the host cannot reach other machines on the internal network unless the firewall specifically permits it.

The operating system of a bastion host should be administered only by experienced administrators, with a log data system implemented for activity monitoring. In addition, the admin should keep track of known vulnerabilities so threats can be averted in advance, weighing each one to judge whether it can be fixed with a simple configuration tweak or whether a full installation patch is needed to protect the affected system from attack.
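As a loose illustration of the "only the necessary ports are open" principle, the Python sketch below probes a host and reports anything listening outside an allowed list; the host name and the port policy are assumptions for the example, and a real audit would use proper scanning tools.

# Sketch: check that a bastion host only answers on the expected ports.
import socket

HOST = 'bastion.example.com'              # hypothetical bastion host
ALLOWED = {22, 80}                        # e.g. SSH for admins, HTTP for the proxied service

for port in (21, 22, 23, 80, 443, 3389):
    try:
        socket.create_connection((HOST, port), timeout=1).close()
        status = 'open' if port in ALLOWED else 'open (UNEXPECTED, investigate)'
    except OSError:
        status = 'closed/filtered'
    print(f'port {port}: {status}')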


Bastion host fully exposed to outside attacks.
From: http://www.sabronet.com/secure/firewall.html

Mobile Code

Mobile code is any program that travels across a network and can run on a local system without requiring installation. Examples of mobile code are scripts (JavaScript, VBScript), Java applets, ActiveX controls, Flash animations, Shockwave movies (and Xtras), and macros embedded within Microsoft Office files.

Mobile code can also be downloaded and run on a target workstation via email, either as an attachment or within an HTML email body. Due to its portable nature, such code can download and execute without the user's awareness.

Mobile code can also be encapsulated or embedded in other file formats originally intended for read-only purposes, like JavaScript in a PDF.

Friday, 13 May 2016

Single sign-on

Single sign-on (SSO) is the practice of offering users access to all of their password-protected applications after inputting only one master password. A single authentication check lets users unlock the other systems and accounts that are secured by different passwords. This goes a long way towards reducing the password fatigue brought on by having to type in a username and password every time access to a system or account is needed. SSO is most commonly accomplished using the Lightweight Directory Access Protocol (LDAP) and LDAP databases on directory servers: with a single authenticating system, access to all services and accounts can be inherited from one password, while the directory stores the known usernames and credentials of each user.
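For illustration, here is a minimal sketch of that single authentication check as an LDAP bind, using the third-party ldap3 Python library; the server address and the user's distinguished name are invented for the example.

# Sketch: one LDAP "bind" that an SSO layer could reuse across services.
from ldap3 import Server, Connection

server = Server('ldap://ldap.example.com')                      # hypothetical server
conn = Connection(server,
                  user='uid=jdoe,ou=people,dc=example,dc=com',  # hypothetical DN
                  password='master-password')
if conn.bind():                           # the single authentication check
    print('authenticated as:', conn.extend.standard.who_am_i())
conn.unbind()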

Security through obscurity

In IT security, security through obscurity is the deliberate concealment of one's own IT infrastructure in order to make it less susceptible to intrusion. Its premise is that keeping a system or component hidden from conventional view improves the odds of it not being targeted by attackers. Systems relying on security through obscurity commonly implement other security measures as well, with the cloaking effectively acting as an extra layer of security. The technique stands in contrast with security by design and open security, although many real-world projects include elements of all three strategies.
Relying on security through obscurity alone, without real safety measures behind it, leads to a false sense of security, which is often more dangerous than not addressing security at all.


Example of security through obscurity.
Retrieved on 13/5/2016 from: http://www.treachery.net/articles_papers/tutorials/why_security_through_obscurity_isnt/index-2.html

Tuesday, 3 May 2016

Information Systems glossary.


- 5 S: Japanese method for workplace organisation. The 5 S are:
seiri - (sort) remove unnecessary items and dispose of them properly.
seiton - (systematic arrangement) arrange all necessary items so that they can be easily selected for use.
seiso - (shine) keep the workplace clean.
seiketsu - (standardise) standardisation of the previous three.
shitsuke - (sustain) discipline and regular audits.

- Artificial Intelligence – the academic field that studies the capacity of machines and computers to exhibit intelligent behaviour, where intelligent behaviour means the ability to scan one's surroundings and make sound decisions that maximise one's chance of success, based on analysing and processing information according to what the context requires. In business settings AI can be used to execute routines that call for low-skilled work and to interpret data in ways conducive to pattern recognition, thus providing valuable insight for decision-making.

- B2B – business to business. The practice in e-commerce of two companies conducting business between themselves, with one company playing the role of supplier and the other the role of client.

- B2C – business to consumer. The sale of goods or services to the typical end consumer.

- BSC – balanced scorecard. A business approach that considers perspectives other than profit. Besides the obvious financial perspective, there is the customer perspective, under which the business should think through its practices in order to better cater to its intended audience; the reflective question typically asked is "how does the customer see us?". Internal business processes is another perspective, concerned with answering the question "what must we excel at?". Learning and growth considers the question "how can we continue to improve and innovate?"; this perspective relates to efficacy as internal business processes relates to efficiency.

- Business Intelligence – a system known for being dynamic and flexible, optimised to present users with information in a format that facilitates decision-making and best business practices.

- CMM - capability maturity model. A framework for measuring how mature a company's processes are. The levels are:

Level 1 – Initial (Chaotic): It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

Level 2 – Repeatable: It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Level 3 – Defined: It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

Level 4 – Managed: It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level.

Level 5 – Optimising: It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.


- Customer Relationship Management – refers to all instances and channels of communication with the company's clients. The system's purpose is to glean all kinds of customer data through a variety of means, including call centres, data mining, surveys, post-sales follow-up, logs from the company's other systems, etc. The aim of CRM is to make customer service more effective by predicting customers' preferences and tailoring products and service approaches to better suit those preferences.

- Data Cube - a three-dimensional (3D) range of values, generally used to describe multidimensional extensions of two-dimensional tables. It can be viewed as a collection of identical 2-D tables stacked upon one another.
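A hypothetical NumPy sketch of the idea: a 3-D sales cube indexed by product, region and month, which collapses into familiar 2-D tables when summed along one axis. The dimensions and figures are made up.

# Sketch: a 3-D data cube (product x region x month) and two 2-D views of it.
import numpy as np

rng = np.random.default_rng(0)
cube = rng.integers(0, 100, size=(3, 4, 12))    # 3 products, 4 regions, 12 months

by_product_region = cube.sum(axis=2)  # yearly totals per product/region
by_region_month = cube.sum(axis=0)    # monthly totals per region
print(by_product_region.shape, by_region_month.shape)  # (3, 4) (4, 12)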

- Data mart – a subset of the data warehouse dedicated to a specific area within an organisation.

- Data mining – the extraction of data about customers' preferences and behaviour patterns as observed on market channels such as web browsing, previous transactions, emailed orders, etc.

- Data warehouse - a system used for reporting and data analysis, gathering input from different sources. It consists of a database kept separate from the organisation's operational database, with no frequent updating. It contains consolidated historical data, which helps executives analyse the company as a whole and organise, understand and use their data to make strategic decisions.

- Decision Support System (DSS) – a computerised information system used to support decision-making in a business. A DSS enables users to sift through and analyse massive reams of data and compile information that can be used to solve problems and make better decisions. It's often decoupled from the company's other systems, drawing on existing datasets to provide more reliable input for accurate decision-making.

- DIKC – data, information, knowledge and competence: the four basic concepts of any information system. Data is the smallest unit of meaning for a computer system; a piece of data by itself means nothing, but once processed it becomes information. Information is data with meaning, in readable form for a human user. Knowledge is awareness and understanding of how information can be applied to a useful end, which often entails making a better-informed decision or rearranging processes so they run with more efficiency and efficacy. Competence is the mastery of knowledge in real-life scenarios: possessing the faculty needed to expertly use knowledge whenever the situation calls for it.

- e-business – the practice of making all of a company's processes available in electronic format.

- e-commerce – a subset of e-business concerned with the actual transaction between company and consumer, resulting in the sale of a product or service to a final user.

- EDI - Electronic Data Interchange. The computer-to-computer exchange of business documents in a standard electronic format between business partners.

- Enterprise Application Integration – the use of technologies and services across an enterprise to enable the integration of software applications and hardware systems. EAI is related to middleware technologies and is responsible for integrating all of a company's existing systems, which can be difficult when those systems were never designed to work together. An SOA is a common solution to enterprise application integration challenges.

- Enterprise Resource Planning – a system specialised in integrating all of a company's processes. This brings down barriers between departments and makes information readily available in real time to all the right users: information that is altered causes an instant update in all related areas. To accomplish this, information should come from a single database.

- Expert Systems - modelled on artificial intelligence techniques, these systems seek to simulate the reasoning of an expert professional. An expert system is fed input by its users and by other systems and applications, and it organises information and solves problems within a specialised domain, as if the analysis had been done by a human expert.
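A toy rule-based sketch in Python shows the general shape (the domain and the rules are invented): facts go in, and hand-encoded expert rules return the first matching recommendation.

# Sketch of a tiny rule-based expert system; rules are hypothetical.
RULES = [
    (lambda f: f['cpu_load'] > 0.9 and f['queue'] > 100, 'add capacity'),
    (lambda f: f['error_rate'] > 0.05, 'roll back the last deployment'),
    (lambda f: True, 'no action needed'),   # default rule, always matches last
]

def diagnose(facts: dict) -> str:
    # Apply the expert's rules in order and return the first recommendation.
    for condition, recommendation in RULES:
        if condition(facts):
            return recommendation

print(diagnose({'cpu_load': 0.95, 'queue': 250, 'error_rate': 0.01}))  # add capacity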

- Neural network – a program with a built-in capacity for machine learning, loosely modelled on the way knowledge is naturally acquired by performing a task with increasing efficiency and efficacy. It is optimised to find the best way to perform something by comparing each attempt with how it was done the previous time: each iteration improves upon the last, refining how the job gets done with the minimum possible effort.
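As a minimal illustration, a single artificial neuron can learn the logical AND function by nudging its weights after every wrong guess; this perceptron sketch is purely didactic.

# Sketch: a single perceptron learning logical AND.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # each pass improves on the previous one
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out
        w[0] += rate * err * x1          # adjust the weights based on the mistake
        w[1] += rate * err * x2
        bias += rate * err

print([1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
       for (x1, x2), _ in samples])      # [0, 0, 0, 1]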

- OLAP – online analytical processing. A query tool for generating reports at a much faster rate than OLTP. The information is read-only and destined for management staff for decision-making purposes.

- OLTP – online transaction processing. A data query tool that focuses on the operational chores conducted on a daily basis. Data is stored in standard data sets and, although the system receives a lot of input from regular business routines, it's poorly suited to generating clear reports for management analysis; the high level of detail makes it best suited to technical staff.
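To contrast the two, here is a small sketch using Python's built-in sqlite3 module with an invented schema: row-by-row inserts stand in for OLTP-style transactions, and a GROUP BY summary stands in for an OLAP-style report.

# Sketch: OLTP-style detail writes versus an OLAP-style aggregate read.
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE sales (region TEXT, amount REAL)')

# OLTP: many small, detailed transactions recorded as they happen.
db.executemany('INSERT INTO sales VALUES (?, ?)',
               [('north', 10.0), ('north', 5.5), ('south', 8.0)])

# OLAP: a read-only summary suited to management reporting.
for row in db.execute('SELECT region, SUM(amount) FROM sales GROUP BY region'):
    print(row)   # ('north', 15.5) then ('south', 8.0)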

- Organisation and Methods (O&M) - the systematic examination of an organisation's structure, procedures and methods, and management and control, from the lowest (clerical or shop-floor) level to the highest (CEO, president, managing director). Its objective is to assess their comparative efficiency in achieving defined organisational aims. O&M concerns itself mainly with administrative procedures (not manufacturing operations) and employs techniques such as operations research, work study and systems analysis.

- PDCA – plan, do, check, act. Also called the Deming cycle, a cyclic approach to continuous improvement in business processes. In the plan step, methodologies are drawn up to achieve established goals, while the do step consists of actually performing the course of action based on the previous phase. During the check stage, the manager carefully surveys the process, trawling it for flaws and for ways to make it more efficient and effective. The act stage is where the actual changes are implemented.

- Supply Chain Management – a system built for mapping the entirety of business processes, from raw material production and transportation to the moment the finished good is sold to the final customer.