Saturday, 28 May 2016

IT controls

IT controls are specific activities, performed by persons or systems, designed to ensure that business objectives are met. IT control objectives tie in to the core tenets of IT security, namely the confidentiality, integrity and availability of data, making them a valuable tool for the IT management of any business enterprise. IT controls fall into two categories: IT general controls (ITGC) and IT application controls. ITGC include controls over the Information Technology (IT) environment, computer operations, access to programmes and data, programme development and programme changes. IT application controls gauge the processing of transactions, checking the accuracy of input and output routines.

IT General Controls (ITGC)

ITGC help ensure the reliability of data generated by IT systems, supporting the assertion that systems operate as intended and that their output is reliable. The following types of control are common for ITGC:

Control environment - controls designed to shape the corporate culture or "tone at the top".
Change management procedures - controls designed to ensure that changes meet business requirements and are authorised.
Source code/document version control procedures - controls designed to protect the integrity of programme code.
Software development life cycle standards - controls designed to ensure IT projects are effectively managed.
Logical access policies, standards and processes - controls designed to manage access based on business need.
Incident management policies and procedures - controls designed to address operational processing errors.
Problem management policies and procedures - controls designed to identify and address the root cause of incidents.
Technical support policies and procedures - policies to help users perform more efficiently and report problems.
Hardware/software maintenance - configuration, installation, testing, management standards, policies and procedures.
Disaster recovery/backup and recovery procedures - to enable continued processing despite adverse conditions.
Physical security - controls to ensure the physical security of information technology from individuals and from environmental risks.

IT application controls

These controls are fully automated to ensure that data are processed accurately from input through output, and that transmitted data remain private and secure in the process. IT application controls may include the following:

Completeness checks - controls that ensure all records were processed from initiation to completion.
Validity checks - controls that ensure only valid data is input or processed.
Identification - controls that ensure all users are uniquely and irrefutably identified.
Authentication - controls that provide an authentication mechanism in the application system.
Authorisation - controls that ensure only approved business users have access to the application system.
Input controls - controls that ensure the integrity of data fed into the application system from upstream sources.
Forensic controls - controls that verify the logical accuracy of data based on input and output checksums.
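
As an illustrative sketch (not drawn from any particular product), a few of these checks can be expressed in Python; the record fields, validity rules and batch layout here are all hypothetical:

```python
import hashlib

def validity_check(record):
    """Validity check: only records with a positive amount and a known currency pass."""
    return record.get("amount", 0) > 0 and record.get("currency") in {"GBP", "EUR", "USD"}

def completeness_check(input_records, processed_records):
    """Completeness check: every record sent for processing was actually processed."""
    return len(processed_records) == len(input_records)

def forensic_checksum(records):
    """Forensic control: a checksum over the batch, compared between input and output."""
    digest = hashlib.sha256()
    for r in sorted(records, key=lambda r: r["id"]):
        digest.update(repr(sorted(r.items())).encode())
    return digest.hexdigest()

batch = [{"id": 1, "amount": 100, "currency": "GBP"},
         {"id": 2, "amount": 250, "currency": "EUR"}]
valid = [r for r in batch if validity_check(r)]
assert completeness_check(batch, valid)                      # all records processed
assert forensic_checksum(batch) == forensic_checksum(valid)  # input/output checksums match
```

The point of the sketch is that each control is a small, mechanical test applied to every batch, rather than a human judgement call.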

IT controls and the CIO/CISO
The organisation's Chief Information Officer (CIO) or Chief Information Security Officer (CISO) is typically responsible for the security, accuracy and reliability of the systems that manage and report all the company's data.

Internal control frameworks
COBIT (Control Objectives for Information and Related Technologies)
COBIT is a common framework for best practices in both IT general and application controls. Its premise is that IT processes satisfy business requirements through specific IT control activities and through the evaluation of those processes. The four major COBIT domains are: plan and organise; acquire and implement; deliver and support; and monitor and evaluate.
Another common framework is COSO (Committee of Sponsoring Organizations of the Treadway Commission), which uses five elements of internal control: control environment, risk assessment, control activities, information and communication, and monitoring.

Monday, 23 May 2016

Hot site

A hot site is an off-premises location to allow a business to continue computer and network operations in the event of a computer or equipment disaster. A hot site has all the equipment necessary for a business to resume regular activities, including jacks for phones, backup data, computers and related peripherals. Hot sites can be part of a business continuity plan or disaster recovery plan, where plans and procedures are laid out in the event that normal business activities cannot go on as usual in the normal location.
If an enterprise's data center becomes inoperable, for instance, all data processing operations are moved to a hot site. A cold site is similar in concept but provides office space only; it's up to the customer to provide and install all the equipment needed to resume operations. As a result, it takes longer to get an enterprise back in full operation after a disaster when a cold site is used.

Saturday, 21 May 2016

Difference between contingency plan and contingency planning

A contingency plan is made for emergency response, backup operations and post-disaster recovery for information systems and IT facilities when an unexpected service interruption takes place. The objective of this plan is to minimise the impact on normal service capacity in the event of damage to information systems or the facilities in which they're housed. Crisis management is part of the contingency plan in that it describes the measures to be taken to manage unexpected occurrences in the operational environment.

Contingency planning addresses how to keep a company's critical processes running if any disruption happens. It's how a company prepares its staff for emergency situations. A major element of that preparation is envisioning all of the potential emergencies that could occur: if a scenario would be dire, it is worth the time and resources to prepare for its realisation. Businesses, governments and other organisations that employ contingency planning consider a range of scenarios that could affect their operations, aiming to be comprehensive in the scope of emergencies they examine. Overlooking a possible category of emergency in the planning phase can leave an organisation poorly prepared when a crisis hits. A helpful way to visualise the importance of contingency planning is to ask how you would react if your house suddenly caught fire. It might be tempting to think the obvious answer is to gather all the belongings that can be salvaged and make a run for it as fast as possible. But that is wishful thinking: a prediction of what you would instinctively do should it happen.

Nevertheless, effective contingency planning can't rest on instinctive reaction alone. If anything, it's counterproductive to rely on knee-jerk reflexes while throwing caution and reason to the winds. To be best prepared when a fire starts, you should think through all the steps that ensure as much safety as possible while minimising material loss. These would include ensuring that the fire brigade's number and a handy phone are within reach, placing fire extinguishers at strategic locations and becoming familiar with operating them so they can be deployed quickly, placing exit signs to coordinate a safe escape, making sure that emergency stairways are always unobstructed, and so on. The procedures might seem glaringly obvious from a reasonable standpoint, but in the heat of the moment it's easy to get caught up and not do the most reasonable thing. Officially documenting a contingency plan helps prevent chaotic behaviour that would only exacerbate the trouble. The same applies to limiting the damage done to an organisation's operations and information systems. Contingency planning goes through a series of similar stages: identification of critical processes, business impact analysis, plan development and documentation, training, testing, and maintenance and update.

Common steps for contingency planning.

Friday, 20 May 2016

Contingency Plan

A contingency plan is a plan devised for handling disasters, although any plan designed for an outcome other than the expected one can be called a contingency plan. Often referred to as plan B, it's applied to risks of great magnitude that would have wide-reaching consequences for the business. A contingency plan is often necessary to avoid the freeze-out that occurs when someone is faced with a situation previously thought unlikely to occur. It describes not only how to prepare for a disaster but also how one should act in the actual occurrence of one. It usually describes the tasks, responsibilities and competences assigned to the staff of an organisation. Devising an effective contingency plan includes a business impact analysis and assessment stage.

The seven steps outlined for an IT contingency plan are:

1. Develop the contingency planning policy statement. A formal policy provides the authority and guidance necessary to develop an effective contingency plan.

2. Conduct the business impact analysis (BIA). The BIA helps identify and prioritize information systems and components critical to supporting the organization’s mission/business functions.

3. Identify preventive controls. Measures taken to reduce the effects of system disruptions can increase system availability and reduce contingency life cycle costs.

4. Create contingency strategies. Thorough recovery strategies ensure that the system may be recovered quickly and effectively following a disruption.

5. Develop an information system contingency plan. The contingency plan should contain detailed guidance and procedures for restoring a damaged system unique to the system’s security impact level and recovery requirements.

6. Ensure plan testing, training and exercises. Testing validates recovery capabilities, training prepares recovery personnel for plan activation, and exercising the plan identifies planning gaps; combined, these activities improve plan effectiveness and overall organisational preparedness.

7. Ensure plan maintenance. The plan should be a living document that is updated regularly to remain current with system enhancements and organizational changes.


ROUSE, Margaret. Contingency Plan.

Tuesday, 17 May 2016


ICA-AtoM

ICA stands for International Council on Archives, while AtoM is short for "Access to Memory". It's a fully web-based repository application that supports both single and multi-repository implementations. It is an open-source system built to streamline archival workflow, enabling repositories to launch their collections online with minimal cost and effort. It supports multiple collection types in a user-friendly way, following best practices for accessibility, making it flexible and customisable for small and large organisations alike.

As a project, ICA-AtoM is free, open-source software developed by Artefactual Systems in collaboration with the ICA Program Commission (PCOM) and a growing network of international partners.

Sunday, 15 May 2016

Booster Bag

A booster bag is a handmade bag used to shoplift, typically from retail stores, libraries, and any other location employing security detectors to deter theft. The booster bag can be an ordinary shopping bag, backpack, pocketed garment, or other inconspicuous container whose inside is lined with a special material, typically multiple layers of aluminium foil.

An item is placed inside the booster bag, which is in effect a Faraday cage. This provides electromagnetic shielding, with the result that electronic security tags inside the bag may not be detected by security panels in the detector antennas at the store exit.

Booster bags have been used by professional shoplifters for several years. Using them, a shoplifter can steal dozens of items with very little effort.

The name "booster bag" comes from "boost" in the slang sense of "shoplift."


Principal

A principal in computer security is any entity, such as a person, computer, service, process or thread, or any group of such things, that can be authenticated by a computer system or network.
Principals need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier that allows it to be referenced for identification or for the assignment of properties and permissions.

Bastion host

A bastion host is a server that either offers services over an open internet connection or works as a proxy for accessing the internet, and therefore needs to be particularly well protected against malicious attacks. To achieve this, the server is placed in a demilitarised zone and shielded from both the outside network and intranet accesses by a firewall set to restrict contact between these zones. As a critical strong point in network security, a bastion host is a computer built specifically to withstand attacks. This setup blocks direct access between the internal network and an external network like the world wide web by making sure that only the necessary ports are open at any given time; for example, an outside host can only reach the web server if the firewall specifically states that port 80 may be used. The operating system of a bastion host should only be administered by experienced administrators, with a log data system implemented for activity monitoring. In addition, the admin should keep track of known vulnerabilities to avert threats in advance, weighing whether a vulnerability is relevant enough to be fixed with a simple configuration tweak or whether a full installation patch is needed to protect the affected system from attacks.

Bastion host fully exposed to outside attacks.
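
The default-deny rule set described above can be sketched in Python; the host names and the choice of exposed ports are hypothetical:

```python
# Default-deny packet filter sketch: traffic is dropped unless an explicit rule allows it.
ALLOW_RULES = [
    {"dest": "bastion-web", "port": 80},   # outside world may reach the web server over HTTP
    {"dest": "bastion-web", "port": 443},  # ...and over HTTPS
]

def firewall_allows(dest, port):
    """Return True only if an explicit rule permits this destination/port pair."""
    return any(r["dest"] == dest and r["port"] == port for r in ALLOW_RULES)

print(firewall_allows("bastion-web", 80))    # allowed: explicit rule exists
print(firewall_allows("bastion-web", 22))    # dropped: SSH is not exposed
print(firewall_allows("internal-db", 5432))  # dropped: internal hosts are never reachable
```

The design choice is that safety is the default: forgetting to write a rule closes a port rather than opening one.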

Mobile Code

Mobile code is any application that moves across a network that may run on a local system without requiring installation. Examples of mobile code are scripts (JavaScript, VBScript), Java applets, ActiveX controls, Flash animations, Shockwave movies (and Xtras), and macros embedded within Microsoft Office files.

Mobile code can also be downloaded and run on a target workstation by email, either as an email attachment or via an HTML email body. Due to its portable nature, it can download and execute without the user's awareness.

Mobile code can also be encapsulated or embedded in other file formats originally intended for read-only purposes, like JavaScript in a PDF.

Friday, 13 May 2016

single sign-on

Single sign-on (SSO) is the practice of offering users access to all of their password-protected applications after inputting only one master password. This lets users unlock other systems and accounts secured by different passwords with a single authentication check. It is a meaningful way of reducing the password fatigue brought on by having to type in one's username and password
whenever access to a system or an account is necessary. SSO is mostly accomplished using the Lightweight Directory Access Protocol (LDAP) and related LDAP databases on directory servers. By having only one authenticating system, all services and accounts can be inherited through a single password, while the known usernames and passwords of the person operating the system are stored centrally.
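
A minimal sketch of the idea in Python, with an in-memory user store standing in for the directory server; the usernames, services and token scheme are all made up for illustration:

```python
import hashlib
import secrets

# In-memory stand-in for an LDAP directory: username -> salted password hash.
_salt = b"demo-salt"
DIRECTORY = {"alice": hashlib.sha256(_salt + b"s3cret").hexdigest()}

SESSIONS = {}  # token -> username, shared by every participating service

def sign_on(username, password):
    """One authentication check; returns a session token usable everywhere."""
    if DIRECTORY.get(username) == hashlib.sha256(_salt + password.encode()).hexdigest():
        token = secrets.token_hex(16)
        SESSIONS[token] = username
        return token
    return None

def service_access(service_name, token):
    """Any service accepts the shared token instead of asking for a password again."""
    user = SESSIONS.get(token)
    return f"{service_name}: welcome {user}" if user else f"{service_name}: access denied"

token = sign_on("alice", "s3cret")
print(service_access("mail", token))      # mail: welcome alice
print(service_access("calendar", token))  # calendar: welcome alice
print(service_access("mail", "bogus"))    # mail: access denied
```

The user authenticates once; every service then trusts the shared session store rather than keeping its own password database.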

Security through obscurity

In IT security, security through obscurity is the deliberate concealment of one's own IT infrastructure in order to make it less susceptible to intrusion. Its premise is that making the system or component invisible through conventional lenses improves the odds of it not being targeted by hackers. Systems relying on security through obscurity commonly implement other security measures as well, with the cloaking from outside forces being effectively an extra layer of security. The technique stands in contrast with security by design and open security, although many real-world projects include elements of all three strategies.
Relying on security through obscurity alone, without real safety measures behind it, leads to a false sense of security, which is often more dangerous than not addressing security at all.

Example of security through obscurity.
Retrieved on 13/5/2016 from:

Tuesday, 3 May 2016

Information Systems glossary.

•  5 S: Japanese method for workplace organisation. The 5 S are:
seiri - (sort) remove unnecessary items and dispose of them properly.
seiton - (systematic arrangement) arrange all necessary items so that they can be easily selected for use.
seiso - (shine) keep the workplace clean.
seiketsu - (standardise) standardisation of the previous 3.
shitsuke - (sustain) discipline and regular audits.

•  Artificial Intelligence – an academic field that studies the capacity of machines and computers to exhibit intelligent behaviour, where intelligent behaviour means the ability to scan one's surroundings and make a sound decision to maximise one's chance of success, based on analysis and processing of information according to what the context requires. In business settings AI can be used to execute routines that call for low-skilled work and to interpret data in ways conducive to pattern recognition, thus providing valuable insight for decision-making processes.

•  B2B – business to business. The practice in e-commerce of two companies conducting business between themselves. When this happens, one company plays the role of supplier while the other plays the role of client.

•  B2C – business to consumer. The procurement of goods or services by a typical consumer.

•  BSC – balanced scorecard. A business approach that considers perspectives other than profits. Besides the obvious financial perspective, there is the customer perspective, where the business should think through its practices in order to better cater to its intended audience; the reflective question typically asked is "how does the customer see us?". Internal business processes is another perspective, concerned with answering the question "what must we excel at?". Learning and growth considers the question "how can we continue to improve and innovate?"; this perspective relates to efficacy as internal business processes relates to efficiency.

•  Business Intelligence – a system known for being dynamic and flexible, optimised to present users with information in a format that facilitates decision-making and best business practices.

•  CMM - capability maturity model. A framework for measuring how mature a company's processes are. The levels are:

Level 1 - Initial (Chaotic): It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes.

Level 2 – Repeatable: It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Level 3 – Defined: It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

Level 4 – Managed: It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level.

Level 5 – Optimising: It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements.

•  Customer Relationship Management – refers to all instances and channels of communication with the company's clients. This system's purpose is to glean all kinds of data through a variety of means including call centres, data mining, surveys, post-sales follow-up, logs on other company systems, etc. The aim of CRM is to make customer service more effective by predicting customers' preferences and tailoring products and service approaches to better suit said preferences.

•  Data Cube - a three-dimensional (3D) range of values, generally used to describe multidimensional extensions of two-dimensional tables. It can be viewed as a collection of identical 2-D tables stacked upon one another.
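
A toy illustration in Python, treating the cube as values indexed by three dimensions (product, region, year); the sample figures are invented:

```python
# A tiny data cube: sales amounts indexed by (product, region, year).
cube = {
    ("widget", "north", 2015): 120,
    ("widget", "south", 2015): 80,
    ("widget", "north", 2016): 150,
    ("gadget", "north", 2016): 60,
}

def slice_cube(cube, year):
    """Fixing one dimension yields one of the stacked 2-D tables (product x region)."""
    return {(p, r): v for (p, r, y), v in cube.items() if y == year}

def rollup(cube, dim):
    """Aggregate along all dimensions except the chosen one (0=product, 1=region, 2=year)."""
    totals = {}
    for key, v in cube.items():
        k = key[dim]
        totals[k] = totals.get(k, 0) + v
    return totals

print(slice_cube(cube, 2016))  # {('widget', 'north'): 150, ('gadget', 'north'): 60}
print(rollup(cube, 2))         # {2015: 200, 2016: 210}
```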

•  Data mart – a subset of the data warehouse dedicated to a specific area within an organisation.

•  Data mining – the extraction of data about customers' preferences and behaviour patterns as observed on market channels such as web browsing, previous transactions, emailed orders, etc.

•  Data warehouse - a system used for reporting and data analysis, gathering input from different sources. It consists of a database kept separate from the organisation's operational database and is not updated frequently. It contains consolidated historical data, which helps executives analyse the company as a whole and organise, understand and use their data to take strategic decisions.

•  Decision Support System – DSS. A computerised information system used to support decision-making in a business. A DSS enables users to sift through and analyse massive reams of data and compile information that can be used to solve problems and make better decisions. It's often unbound to the company's other systems and draws information from existing datasets to provide more reliable means oriented towards accurate decision-making.

•  DIKC – data, information, knowledge and competence. The four basic concepts of any information system. Data is the smallest unit of meaning for a computer system; a piece of data by itself means nothing, but once it's processed, it becomes information. Information is data with meaning, in readable form for a human user. Knowledge is awareness and understanding of how information can be applied to a useful end; this often entails making a sounder decision or rearranging processes so they run with more efficiency and efficacy. Competence is mastery of knowledge in real-life scenarios: possessing the faculty needed to expertly use knowledge whenever the situation calls for it.

•  e-business – the practice of making all of a company's processes available in electronic format.

•  e-commerce – a subset of e-business concerned with the actual transaction between company and consumer, resulting in the sale of a product/service to a final user.

•  EDI - Electronic Data Interchange. The computer-to-computer exchange of business documents in a standard electronic format between business partners.

•  Enterprise Application Integration – the use of technologies and services across an enterprise to enable the integration of software applications and hardware systems. EAI is related to middleware technologies. It is responsible for successfully integrating all of a company's existing systems, which may cause some problems. An SOA is a common solution to enterprise application integration challenges.

•  Enterprise Resource Planning – a system specialised in integrating all of a company's processes. This brings down barriers between departments and allows information to be readily available in real time to all the right users. Information that is altered causes an instant update in all related areas. In order to accomplish this, information should come from a single database.

•  Expert Systems - modelled after artificial intelligence systems, these systems are more objective as they seek to simulate the reasoning of an expert professional. An expert system is fed input by its users and by other systems and applications, and organises information and solves problems in specialised formats, as if the analysis had been done by a proper expert.

•  Neural network – a program modelled on the natural acquisition of knowledge about how to perform a task with more efficiency and efficacy; the same process is responsible for machine learning. A computer program written with built-in neural network capacity is optimised for analysing the best way to perform something by comparing it with how it was done the previous time. Each iteration improves upon the previous attempt, adding more depth to the procedures of how the job is supposed to get done using the minimum possible amount of effort.
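
As an illustrative sketch of that iterate-and-improve idea (a single perceptron, not a full neural network), here is a Python program that learns the logical AND function by nudging its weights after each mistake; the learning rate and epoch count are arbitrary choices:

```python
# A single perceptron learning AND: each pass improves on the previous attempt.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in samples:
        error = target - predict(x)     # 0 when the guess was right
        w[0] += lr * error * x[0]       # nudge weights toward the correct answer
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1] once training has converged
```

Each wrong guess shifts the weights slightly, so later iterations make fewer mistakes, which is the essence of the learning described above.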

•  OLAP – online analytical processing. A query tool for generating reports at a much faster rate than OLTP. The information is read-only, destined for management staff for decision-making purposes.

•  OLTP – online transaction processing. A data query tool that focuses on operational chores conducted on a daily basis. Data is stored in standard data sets and, although it gets a lot of input as expected from regular business routines, it's poorly suited to generating clear reports for management analysis. It's best suited to technical staff due to its high level of detail.
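
The contrast can be illustrated with SQLite in Python: the row-by-row inserts are the OLTP workload, while the aggregating query is the OLAP-style report (the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, region TEXT, amount REAL)")

# OLTP: many small day-to-day transactions, written as they happen.
orders = [("widget", "north", 100.0), ("widget", "south", 80.0),
          ("gadget", "north", 60.0), ("widget", "north", 50.0)]
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", orders)

# OLAP-style query: a read-only aggregation suited to management reporting.
report = conn.execute(
    "SELECT product, SUM(amount) FROM sales GROUP BY product ORDER BY product"
).fetchall()
print(report)  # [('gadget', 60.0), ('widget', 230.0)]
```

The same data serves both workloads, but the detailed rows are what operations staff touch, while management only ever sees the summarised view.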

•  Organisation and Method (O&M) - systematic examination of an organisation's structure, procedures and methods, and management and control, from the lowest (clerical or shop-floor) level to the highest (CEO, president, managing director). Its objective is to assess their comparative efficiency in achieving defined organisational aims. O&M concerns itself mainly with administrative procedures (not manufacturing operations) and employs techniques such as operations research, work-study and systems analysis.

•  PDCA – plan, do, check, act. Also called the Deming cycle, it is a cyclic approach to continuous improvement in business processes. In the plan step, methodologies are drawn up to achieve established goals, while the do step consists of actually performing the course of action based on the previous phase. During the check stage, the manager carefully surveys the process, trawling it for flaws and for ways to make it more efficient and effective. The act stage is where the actual changes are implemented.

•  Supply Chain Management – a system built for mapping the entirety of business processes, from raw material production and transportation to the moment the finished good is sold to the final customer.