APPLICATION OF OPERATIONS RESEARCH METHODS IN MATHEMATICAL MODELLING OF PRODUCTION PROBLEMS


INTRODUCTION
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences (such as physics, biology, earth science, meteorology) and engineering disciplines (such as computer science, artificial intelligence), as well as in the social sciences (such as economics, psychology, sociology, political science). Physicists, engineers, statisticians, operations research analysts, and economists use mathematical models most extensively. A model may help to explain a system and to study the effects of different components, and to make predictions about behaviour.
The mathematical model of a production system is defined by the following five components:
1. Type of a production system: It shows how the machines and material handling devices are connected and defines the flow of parts within the system.
2. Models of the machines: They quantify the operation of the machines from the point of view of their productivity, reliability, and quality.
3. Models of the material handling devices: They quantify their parameters, which affect the overall system performance.
4. Rules of interactions between the machines and material handling devices: They define how the states of the machines and material handling devices affect each other and, thus, facilitate uniqueness of the resulting mathematical description.
5. Performance measures: These are metrics, which quantify the efficiency of system operation and, thus, are central to analysis, continuous improvement, and design methods developed.
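To make these five components concrete, here is a minimal illustrative sketch in Python (all names and numbers are invented, not from the source) of a two-machine, one-buffer serial line: the machine models carry efficiency and quality parameters, the buffer is the material handling device, the if-conditions encode the interaction rules (blocking and starvation), and throughput is the performance measure.

    import random
    from dataclasses import dataclass

    @dataclass
    class Machine:
        efficiency: float   # probability the machine is up in a given cycle (reliability)
        yield_rate: float   # fraction of processed parts that are good (quality)

    @dataclass
    class Buffer:
        capacity: int       # material handling device parameter
        level: int = 0

    def throughput(m1: Machine, buf: Buffer, m2: Machine, cycles: int = 100_000) -> float:
        """Estimate good parts per cycle for a two-machine, one-buffer line."""
        good = 0
        for _ in range(cycles):
            # Machine 1 produces into the buffer if it is up and the buffer has space.
            if buf.level < buf.capacity and random.random() < m1.efficiency:
                buf.level += 1
            # Machine 2 consumes from the buffer if it is up and a part is available.
            if buf.level > 0 and random.random() < m2.efficiency:
                buf.level -= 1
                if random.random() < m1.yield_rate * m2.yield_rate:
                    good += 1
        return good / cycles   # performance measure: average good parts per cycle

    print(throughput(Machine(0.90, 0.99), Buffer(3), Machine(0.85, 0.98)))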
Mathematical models can take many forms, including but not limited to dynamical systems, statistical models, differential equations, or game theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed.
Principles of Mathematical Modeling
Mathematical modeling is a principled activity that has both principles behind it and methods that can be successfully applied. The principles are over-arching or meta-principles phrased as questions about the intentions and purposes of mathematical modeling. These meta-principles are almost philosophical in nature.
These methodological modeling principles are also captured in the following list of questions and answers:
• Why? What are we looking for? Identify the need for the model.
• Find? What do we want to know? List the data we are seeking.
• Given? What do we know? Identify the available relevant data.
• Assume? What can we assume? Identify the circumstances that apply.
• How? How should we look at this model? Identify the governing physical principles.
• Predict? What will our model predict? Identify the equations that will be used, the calculations that will be made, and the answers that will result.
• Valid? Are the predictions valid? Identify tests that can be made to validate the model, i.e., is it consistent with its principles and assumptions?
• Verified? Are the predictions good? Identify tests that can be made to verify the model, i.e., is it useful in terms of the initial reason it was done?
There is a large element of compromise in mathematical modelling. The majority of interacting systems in the real world are far too complicated to model in their entirety. Hence the first level of compromise is to identify the most important parts of the system; these will be included in the model, and the rest will be excluded. The second level of compromise concerns the amount of mathematical manipulation which is worthwhile. Although mathematics has the potential to prove general results, these results depend critically on the form of the equations used. Small changes in the structure of the equations may require enormous changes in the mathematical methods. Using computers to handle the model equations may never lead to elegant results, but it is much more robust against alterations.
WHAT OBJECTIVES CAN MODELLING ACHIEVE?
Mathematical modelling can be used for a number of different reasons. How well any particular objective is achieved depends on both the state of knowledge about a system and how well the modelling is done. Examples of the range of objectives are:
1. Developing scientific understanding – through quantitative expression of current knowledge of a system (as well as displaying what we know, this may also show up what we do not know);
2. Test the effect of changes in a system;
3. Aid decision making, including
(i) Tactical decisions by managers;
(ii) Strategic decisions by planners.

CLASSIFICATIONS OF MODELS
When studying models, it is helpful to identify broad categories of models. Classification of individual models into these categories tells us immediately some of the essentials of their structure.
One division between models is based on the type of outcome they predict. Deterministic models ignore random variation, and so always predict the same outcome from a given starting point. On the other hand, the model may be more statistical in nature and so may predict the distribution of possible outcomes. Such models are said to be stochastic.
A second method of distinguishing between types of models is to consider the level of understanding on which the model is based. The simplest explanation is to consider the hierarchy of organizational structures within the system being modelled.

STAGES OF MODELLING
It is helpful to divide up the process of modelling into four broad categories of activity, namely building, studying, testing and use. Although it might be nice to think that modelling projects progress smoothly from building through to use, this is hardly ever the case. In general, defects found at the studying and testing stages are corrected by returning to the building stage. Note that if any changes are made to the model, then the studying and testing stages must be repeated.
A pictorial representation of potential routes through the stages of modelling connects Building, Studying, Testing and Use in sequence, with return arrows from Studying and Testing back to Building whenever defects are found.
This process of repeated iteration is typical of modelling projects, and is one of the most useful aspects of modelling in terms of improving our understanding about how the system works.
We shall use this division of modelling activities to provide a structure for the rest of this course.
LITERATURE REVIEW
In the last few decades, many efforts have been made to represent a manufacturing facility as a mathematical model. These models are of different types depending upon the type of production facility. One of them is the time-based model, in which time is the main parameter; the main objective of this type of model is to reduce the time required to produce the final product. Another type is the sequence-based model, whose main objective is to determine the optimal and feasible processing sequence. Hybrid problems can also be formulated by combining these two models.
Some successful attempts at mathematical formulation and optimization are listed here. B. Naderi and A. Azab (2014) formulated operation-position based, operation-sequence based, and heuristic models for the distributed job shop environment; they also developed an evolutionary algorithm to solve these models. Xinyu Li and Liang Gao (2010) formulated a mathematical model of integrated process planning and scheduling, developed an evolutionary algorithm based method for integration and optimization, and compared the feasibility and performance of their proposed method with previous work. J. Behnamiana (2015) solved a mixed integer linear programming model with the CPLEX solver for small-size scheduling instances, and compared the results obtained with a heuristic method against two genetic algorithms on large-size instances. Cheol Min Joo (2015) derived a mathematical model for the unrelated parallel machine scheduling problem, considering sequence- and machine-dependent setup times and machine-dependent processing times. Xiao-Ning Shen and Xin Yao (2015) constructed a mathematical model for the multi-objective dynamic flexible job shop scheduling problem. Hlynur Stefansson (2011) studied a scheduling problem from a pharmaceutical company, decomposed the problem into two parts, compared discrete and continuous time representations for solving the individual parts, and listed the pros and cons of each model. Z.X. Guo (2013) formulated a multi-objective order scheduling problem considering multiple plants, multiple production departments and multiple production processes, and developed a Pareto optimization model to solve it.
MATHEMATICAL THEORY
Optimization is a field within applied mathematics that deals with using mathematical models and methods to find the best possible alternative in decision-making situations. In fact, optimization is the science of making the best possible decision. The expression “best” means that an objective should be defined, and “possible” indicates that there are some restrictions on the decision.
In general, an optimization problem can be formulated as

minimize f(x) subject to x ∈ X,    (1)

in which f : ℝⁿ → ℝ is the objective function and x ∈ ℝⁿ are the decision variables. The set X ⊆ ℝⁿ is the feasible set defined by the restrictions on the decision.
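As a hedged illustration of formulation (1), the following sketch (assuming SciPy is available; the objective and constraints are invented for the example) minimizes a quadratic objective over a feasible set X given by nonnegativity bounds and one linear inequality.

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # Example objective; in practice f comes from the application.
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    # X = { x >= 0 : x0 + x1 >= 1 }
    constraints = ({"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},)
    bounds = [(0, None), (0, None)]

    result = minimize(f, x0=np.array([2.0, 0.0]), bounds=bounds, constraints=constraints)
    print(result.x, result.fun)   # best feasible decision and its objective value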
MODEL CLASSIFICATIONS IN MATHEMATICS
Mathematical models are usually composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, functions, differential operators, etc. Variables are abstractions of system parameters of interest that can be quantified. Several classification criteria can be used for mathematical models according to their structure:
• Linear vs. nonlinear: If all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear. A model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables. Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model. If one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model.
Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity.
• Static vs. dynamic: A dynamic model accounts for time-dependent changes in the state of the system, while a static (or steady-state) model calculates the system in equilibrium, and thus is time-invariant. Dynamic models typically are represented by differential equations.
• Explicit vs. implicit: If all of the input parameters of the overall model are known, and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. But sometimes it is the output parameters which are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton’s method or Broyden’s method (a small Newton-iteration sketch follows this list). For example, a jet engine’s physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle (air and fuel flow rates, pressures, and temperatures) at a specific flight condition and power setting, but the engine’s operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties.
• Discrete vs. continuous: A discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model; while a continuous model represents the objects in a continuous manner, such as the velocity field of fluid in pipe flows, temperatures and stresses in a solid, and electric field that applies continuously over the entire model due to a point charge.
• Deterministic vs. probabilistic (stochastic): A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables; therefore, a deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model—usually called a “statistical model”—randomness is present, and variable states are not described by unique values, but rather by probability distributions.
• Deductive, inductive, or floating: A deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models.[1] Application of catastrophe theory in science has been characterized as a floating model.[2]
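As noted under the explicit vs. implicit distinction above, here is a minimal sketch (invented model and numbers, plain Python) of the implicit case: an explicit model g maps inputs to outputs, and Newton iteration recovers the input that produces a desired output.

    def g(x):
        return x ** 3 + 2.0 * x        # explicit model: input -> output

    def dg(x):
        return 3.0 * x ** 2 + 2.0      # derivative of the model

    def invert(y_target, x=1.0, tol=1e-10, max_iter=50):
        """Newton iteration: find the input x whose output matches y_target."""
        for _ in range(max_iter):
            step = (g(x) - y_target) / dg(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    print(invert(10.0))   # input for which x^3 + 2x = 10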
Significance in the natural sciences
Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are almost invariably expressed using mathematical models.
Throughout history, more and more accurate mathematical models have been developed. Newton’s laws accurately describe many everyday phenomena, but at certain limits relativity theory and quantum mechanics must be used, and even these do not apply to all situations and need further refinement. It is possible to obtain the less accurate models in appropriate limits; for example, relativistic mechanics reduces to Newtonian mechanics at speeds much less than the speed of light, and quantum mechanics reduces to classical physics when the quantum numbers are high. For example, the de Broglie wavelength of a tennis ball is insignificantly small, so classical physics is a good approximation to use in this case.
It is common to use idealized models in physics to simplify things. Massless ropes, point particles, ideal gases and the particle in a box are among the many simplified models used in physics. The laws of physics are represented with simple equations such as Newton’s laws, Maxwell’s equations and the Schrödinger equation, and these laws serve as a basis for making mathematical models of real situations. Many real situations are very complex and are thus modeled approximately on a computer: a model that is computationally feasible is made from the basic laws, or from approximate models derived from the basic laws. For example, molecules can be modeled by molecular orbital models that are approximate solutions to the Schrödinger equation. In engineering, physics models are often made by mathematical methods such as finite element analysis.
Different mathematical models use different geometries that are not necessarily accurate descriptions of the geometry of the universe. Euclidean geometry is much used in classical physics, while special relativity and general relativity are examples of theories that use geometries which are not Euclidean.
Some applications
Since prehistorical times simple models such as maps and diagrams have been used. Often when engineers analyze a system to be controlled or optimized, they use a mathematical model. In analysis, engineers can build a descriptive model of the system as a hypothesis of how the system could work, or try to estimate how an unforeseeable event could affect the system. Similarly, in control of a system, engineers can try out different control approaches in simulations.
A mathematical model usually describes a system by a set of variables and a set of equations that establish relationships between the variables. Variables may be of many types; real or integer numbers, boolean values or strings, for example. The variables represent some properties of the system, for example, measured system outputs often in the form of signals, timing data, counters, and event occurrence (yes/no). The actual model is the set of functions that describe the relations between the different variables.
Building blocks
In business and engineering, mathematical models may be used to maximize a certain output. The system under consideration will require certain inputs. The system relating inputs to outputs depends on other variables too: decision variables, state variables, exogenous variables, and random variables.
Decision variables are sometimes known as independent variables. Exogenous variables are sometimes known as parameters or constants. The variables are not independent of each other as the state variables are dependent on the decision, input, random, and exogenous variables. Furthermore, the output variables are dependent on the state of the system (represented by the state variables).
Objectives and constraints of the system and its users can be represented as functions of the output variables or state variables. The objective functions will depend on the perspective of the model’s user. Depending on the context, an objective function is also known as an index of performance, as it is some measure of interest to the user. Although there is no limit to the number of objective functions and constraints a model can have, using or optimizing the model becomes more involved (computationally) as the number increases.
For example, in economics students often apply linear algebra when using input-output models. Complicated mathematical models that have many variables may be consolidated by use of vectors where one symbol represents several variables.
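As an illustrative sketch of this (with invented numbers, not from the source), the Leontief input-output model collects the per-unit input requirements of all sectors in a matrix A, so the total output x needed to satisfy a final demand d solves the linear system x = Ax + d:

    import numpy as np

    A = np.array([[0.2, 0.3],     # sector 1 inputs per unit output of sectors 1 and 2
                  [0.4, 0.1]])    # sector 2 inputs per unit output of sectors 1 and 2
    d = np.array([100.0, 50.0])   # final demand for each sector's product

    x = np.linalg.solve(np.eye(2) - A, d)   # x = (I - A)^{-1} d
    print(x)                                # total output each sector must produce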
A priori information
Mathematical modeling problems are often classified into black box or white box models, according to how much a priori information on the system is available. A black-box model is a system of which there is no a priori information available. A white-box model (also called glass box or clear box) is a system where all necessary information is available. Practically all systems are somewhere between the black-box and white-box models, so this concept is useful only as an intuitive guide for deciding which approach to take.
Usually it is preferable to use as much a priori information as possible to make the model more accurate. Therefore, the white-box models are usually considered easier, because if you have used the information correctly, then the model will behave correctly. Often the a priori information comes in forms of knowing the type of functions relating different variables. For example, if we make a model of how a medicine works in a human system, we know that usually the amount of medicine in the blood is an exponentially decaying function. But we are still left with several unknown parameters; how rapidly does the medicine amount decay, and what is the initial amount of medicine in blood? This example is therefore not a completely white-box model. These parameters have to be estimated through some means before one can use the model.
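A hedged sketch of this medicine example (assuming SciPy is available; the measurements are invented): the exponential form is the a priori knowledge, and the unknown initial amount and decay rate are estimated from data.

    import numpy as np
    from scipy.optimize import curve_fit

    def concentration(t, c0, k):
        return c0 * np.exp(-k * t)   # known functional form: exponential decay

    t_data = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # hours after the dose
    c_data = np.array([9.8, 7.4, 5.3, 3.0, 0.9])   # measured (noisy) amounts

    (c0_hat, k_hat), _ = curve_fit(concentration, t_data, c_data, p0=(10.0, 0.3))
    print(c0_hat, k_hat)   # estimated initial amount and decay rate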
In black-box models one tries to estimate both the functional form of the relations between variables and the numerical parameters in those functions. Using a priori information we could end up, for example, with a set of functions that probably could describe the system adequately. If there is no a priori information, we would try to use functions as general as possible, to cover all different models. An often used approach for black-box models is the neural network, which usually does not make assumptions about incoming data. Alternatively, the NARMAX (Nonlinear AutoRegressive Moving Average model with eXogenous inputs) algorithms, which were developed as part of nonlinear system identification,[3] can be used to select the model terms, determine the model structure, and estimate the unknown parameters in the presence of correlated and nonlinear noise. The advantage of NARMAX models compared to neural networks is that NARMAX produces models that can be written down and related to the underlying process, whereas neural networks produce an approximation that is opaque.
Subjective information
Sometimes it is useful to incorporate subjective information into a mathematical model. This can be done based on intuition, experience, or expert opinion, or based on convenience of mathematical form. Bayesian statistics provides a theoretical framework for incorporating such subjectivity into a rigorous analysis: we specify a prior probability distribution (which can be subjective), and then update this distribution based on empirical data.
An example of when such approach would be necessary is a situation in which an experimenter bends a coin slightly and tosses it once, recording whether it comes up heads, and is then given the task of predicting the probability that the next flip comes up heads. After bending the coin, the true probability that the coin will come up heads is unknown; so the experimenter would need to make a decision (perhaps by looking at the shape of the coin) about what prior distribution to use. Incorporation of such subjective information might be important to get an accurate estimate of the probability.
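A minimal sketch of this bent-coin example (numbers invented): the experimenter's subjective judgement is encoded as a Beta prior over the heads probability, and observing the single toss updates it by Bayes' rule to a Beta posterior.

    def posterior_mean(alpha, beta, heads, tails):
        """Mean of the Beta(alpha + heads, beta + tails) posterior."""
        return (alpha + heads) / (alpha + beta + heads + tails)

    # Subjective prior: the bend is judged to favour tails, so alpha < beta.
    alpha, beta = 2.0, 4.0
    print(posterior_mean(alpha, beta, heads=1, tails=0))  # estimate after one head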
Complexity
In general, model complexity involves a trade-off between simplicity and accuracy of the model. Occam’s razor is a principle particularly relevant to modeling; the essential idea is that among models with roughly equal predictive power, the simplest one is the most desirable. While added complexity usually improves the realism of a model, it can make the model difficult to understand and analyze, and can also pose computational problems, including numerical instability. Thomas Kuhn argues that as science progresses, explanations tend to become more complex before a paradigm shift offers radical simplification.
For example, when modeling the flight of an aircraft, we could embed each mechanical part of the aircraft into our model and would thus acquire an almost white-box model of the system. However, the computational cost of adding such a huge amount of detail would effectively inhibit the usage of such a model. Additionally, the uncertainty would increase due to an overly complex system, because each separate part induces some amount of variance into the model. It is therefore usually appropriate to make some approximations to reduce the model to a sensible size. Engineers often can accept some approximations in order to get a more robust and simple model. For example, Newton’s classical mechanics is an approximated model of the real world. Still, Newton’s model is quite sufficient for most ordinary-life situations, that is, as long as particle speeds are well below the speed of light, and we study macro-particles only.
Examples
• One of the popular examples in computer science is the mathematical modelling of various machines; an example is the deterministic finite automaton (DFA), which is defined as an abstract mathematical concept but, due to the deterministic nature of a DFA, is implementable in hardware and software for solving various specific problems. For example, consider a DFA M with a binary alphabet which requires that the input contain an even number of 0s.
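Since the state diagram for M is not reproduced here, the following sketch implements the same DFA directly: two states track the parity of the 0s seen so far, and a transition table drives the computation.

    def accepts(s: str) -> bool:
        """DFA M: accepts binary strings containing an even number of 0s."""
        table = {("even", "0"): "odd",   # state transition table of M
                 ("even", "1"): "even",
                 ("odd",  "0"): "even",
                 ("odd",  "1"): "odd"}
        state = "even"                   # start state; also the accepting state
        for symbol in s:
            state = table[(state, symbol)]
        return state == "even"

    print(accepts("1001"))   # True: two 0s
    print(accepts("10"))     # False: one 0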

CONCLUSION
Day by day, the complexity of the production environment is increasing. Due to this complexity, it is very difficult to express a manufacturing facility in mathematical form. Mathematical formulation can be made easier by splitting a large problem into pieces and solving them separately. From the literature review it can be concluded that most of the research related to production scheduling concentrates on finding the optimum sequence, or on studies related to makespan and cost. Very few attempts have been made at optimizing labour.
Mathematical modeling can be very difficult if the parameters and the relations between them are not known. In this study, the mathematical model of the manufacturing firm is successfully formulated. This mathematical model can be further modified to obtain the desired flexibility.
Formulating a mathematical model is difficult, but solving it can be more difficult. Sometimes the formulated model gives an infeasible solution, and sometimes it takes a huge amount of time to solve. Such problems can be tackled by modern evolutionary algorithms such as genetic algorithms, tabu search, ant colony optimization, etc.

REFERENCES

[1] B. Naderi, A. Azab (2014), “Modeling and heuristics for scheduling of distributed job shops”, Expert Systems with Applications, vol. 41, pp. 7754–7763.
[2] Xinyu Li, Liang Gao (2010), “Mathematical modeling and evolutionary algorithm-based approach for integrated process planning and scheduling”, Computers & Operations Research, vol. 37, pp. 656–667.
[3] J. Behnamiana (2015), “Minimizing cost related objective in synchronous scheduling of parallel factories in the virtual production network”, Applied Soft Computing, vol. 29, pp. 221–232.
[4] Cheol Min Joo, Byung Soo Kim (2015), “Hybrid genetic algorithms with dispatching rules for unrelated parallel machine scheduling with setup time and production availability”, Computers & Industrial Engineering, vol. 85, pp. 102–109.
[5] Xiao-Ning Shen, Xin Yao (2015), “Mathematical modeling and multi-objective evolutionary algorithms applied to dynamic flexible job shop scheduling problems”, Information Sciences, vol. 298, pp. 198–224.
[6] Hlynur Stefansson (2011), “Discrete and continuous time representations and mathematical models for large production scheduling problems: A case study from the pharmaceutical industry”, European Journal of Operational Research, vol. 215, pp. 383–392.
[7] Z.X. Guo (2013), “Modeling and Pareto optimization of multi-objective order scheduling problems in production planning”, Computers & Industrial Engineering, vol. 64, pp. 972–986.
[8] T. Stock, G. Seliger (2015), “Multi-objective Shop Floor Scheduling Using Monitored Energy Data”, Procedia CIRP, vol. 26, pp. 510–515.

DECISION MAKING THROUGH OPERATIONS RESEARCH


INTRODUCTION
Operations research or operational research is a discipline that deals with the application of advanced analytical methods to help make better decisions. Operations Research is one of the most popular problem solving and decision-making sciences. It is a collection of managerial decision-making tools and programmable rules that provide a basis for decision making by managers at all levels of global business. As the global business environment has become very complex and competitive, Operations Research has gained paramount significance in applications such as lean production, world-class manufacturing (WCM) systems, Six Sigma quality management, benchmarking and just-in-time (JIT) inventory techniques, in industries such as airlines and service organizations, in military branches, and in government. According to Akingbade et al. (1991), it is a problem-solving, science-based activity using analysis and modelling as a basis for aiding decision-makers in organizations to improve the performance of the operations under their control. It deals with analyzing complex business problems and assisting managers in working out the best ways to solve problems and achieve objectives. According to Agbadudu (2006), it can be said to have been in existence since the beginning of mankind. However, the concept actually emerged around 1940, during World War II, when the military management of England and the USA called upon teams of scientists to develop strategies to make the most efficient and consistent use of limited military resources in the war. This study highlights the significance of operations research, its different techniques and its importance in business practices. This paper indicates the importance of OR in finding the optimum solution to critical problems in business organizations. It helps the decision making process in the public and private sectors, government and society. Such wide usage of operational research models by government, industry and academia would not only contribute to the discipline but would also enhance the quality of economic production.
Operations research, known as operational research in British usage, is often considered to be a sub-field of mathematics; the terms management science and decision science are sometimes used as synonyms.
Employing techniques from other mathematical sciences, such as mathematical modeling, statistical analysis, and mathematical optimization, operations research arrives at optimal or near-optimal solutions to complex decision-making problems. Because of its emphasis on human-technology interaction and because of its focus on practical applications, operations research has overlap with other disciplines, notably industrial engineering and operations management, and draws on psychology and organization science. Operations research is often concerned with determining the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost) of some real-world objective. Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries.
LITERATURE REVIEW
Operational research (OR) in business decision making encompasses a wide range of problem-solving techniques and methods applied in the pursuit of improved decisions and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power.
The decision making (DM) problem is of great practical value in many areas of human activity. The most widely used DM methods are based on probabilistic approaches: the well-known Bayes theorem for a conditional probability density function (PDF) is the background for such techniques. It is needed because of the uncertainty in the many parameters that enter any model describing the functioning of real systems or objects. Uncertainty in our knowledge might also be expressed in alternative forms.
Nature of Operations Research
Operations Research is a mathematical approach for analyzing business problems and making decisions in organizations. It aims at providing rational bases for decision making by seeking to understand and structure complex situations, and at using this understanding to predict system behavior and improve performance. The nature of the organization is essentially immaterial. As the name implies, operations research means “research on operations”. Therefore, the nature of Operations Research is to solve problems by studying the operations (i.e., activities) within business organizations. The “research” part of the name implies that Operations Research uses an approach resembling the way research is carried out in established scientific fields. Thus, Operations Research involves creative research carried out on operations; in other words, the nature of Operations Research is to find the best (optimal) solution to a problem.
Problem Solving Approach of Operations Research
There are many different problem solving techniques; operations research is one of the more innovative problem solving approaches. It is characterized by research, data analysis, and creative application of the knowledge gained to scope and bound the problem. The major steps of a typical operations research problem solving approach are the following:
Step I. Identify the Problem. The first step of an OR study is to identify the problem and the environment in which it exists. The operations that constitute this step are visits, research, meetings, observations, etc. With the help of such operations, the OR analyst gains sufficient knowledge and support to proceed and is better prepared to formulate the problem.
Step II. Define the Problem. After identifying the problem, it is defined, and the uses, objectives and limitations of the study are stressed in light of the problem. The end results of this step are a clear grasp of the need for a solution and an understanding of its nature.
Step III. Model Construction. The next step is to construct the model, which is a representation of the real or abstract situation of the problem. These are mathematical models, representing the problem, process or environment in the form of equations, relationships or formulae. The operations in this step are to define the interrelationships among variables and formulate the constraint equations (the result is usually known as an OR model) or to search for suitable alternative models. The hypothetical model must be tested in the field and modified to work under the given environmental constraints. A model may also be modified if the organization is not satisfied with the results it gives.
Step IV. Collection of Relevant Data. It is a well known fact that without authentic and relevant data the results of the formulated model cannot be trusted. Hence, selection of the right kind of data is a necessary step in the OR problem solving process. An important part of this step is the analysis of selected data and facts, collecting opinions from people, and using computer data banks. The purpose of this step is to have sufficient input data to operate and test the model.
Step V. Testing of the Solution. With the help of the constructed model and the collected input data, the problem is solved and its solution obtained. This solution cannot be implemented immediately; it is first used to test the model and to find its limitations, if any. If the solution is not reasonable, or if the model is not behaving properly, updating and modification of the model is considered at this stage. The end result of this step is a solution that is desirable and supports current organizational objectives.
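To make Steps III-V concrete, here is a hedged end-to-end sketch (assuming SciPy is available; the product-mix numbers are invented) that constructs a small linear programming model, feeds it collected data, and returns a solution to be tested against organizational objectives.

    from scipy.optimize import linprog

    # Model: maximize profit 40*x1 + 30*x2; linprog minimizes, so negate.
    c = [-40.0, -30.0]
    A_ub = [[2.0, 1.0],    # machine-A hours per unit of products 1 and 2
            [1.0, 3.0]]    # machine-B hours per unit of products 1 and 2
    b_ub = [100.0, 90.0]   # available hours on machines A and B (collected data)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # optimal production quantities and maximized profit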
Importance of Operations Research in Decision-Making
Operations research applies sophisticated statistical analysis and mathematical modeling to solve an array of business and organizational problems, as well as improve decision-making. As the business environment grows more complex, companies and government agencies rely on analysis to inform decisions that were once based largely on management intuition. Originally developed by the U.S. Department of Defense during World War II, operations research has helped many large companies and government agencies make better decisions, boost performance and reduce risk.
Simplifying Complexity
Modern challenges associated with a global economy and the growth of technology have increased the complexity of the business environment. Modern corporations often strive to serve a global, rather than a regional or national, customer base and face worldwide competition. By relying on sophisticated mathematical models and advanced software tools, operations research can assess all available options facing a firm, project possible outcomes and analyze risks associated with particular decisions. The result is more complete information on which management can make decisions and set policy, according to the Institute for Operations Research and the Management Sciences, INFORMS for short, a national organization of operations research professionals.
Maximizing Data
Companies collect large amounts of data but may feel overwhelmed by the volume and lack the time or expertise to fully analyze these data, transforming them into useful information on which to base decisions. Operations research uses advanced mathematical and statistical techniques, such as linear programming and regression analysis, to help organizations make the most of their data, according to INFORMS’ Science of Better website. Through detailed analysis of the data, operations research analysts can help uncover options that lead to higher profits, more-efficient operations and less risk.
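As a small sketch of the regression side of this (invented figures, plain NumPy), a least-squares line fitted to monthly sales turns raw data into a usable forecast:

    import numpy as np

    months = np.array([1, 2, 3, 4, 5, 6], dtype=float)
    sales  = np.array([12.0, 13.5, 14.8, 16.4, 17.9, 19.1])

    slope, intercept = np.polyfit(months, sales, deg=1)   # least-squares fit
    print(f"forecast for month 7: {slope * 7 + intercept:.1f}")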
Adding Value
In its executive guide to operations research, “Seat-of-the-Pants-Less,” INFORMS reports that operations research has added value to organizations in the public and private sector alike. For example, INFORMS reported that UPS used operations research to redesign its overnight delivery network in such a way that saved more than $80 million between 2000 and 2002. Meanwhile, New Haven, Connecticut, used operations research to determine the extent to which the city’s needle exchange program reduced HIV infection rates.

CONCLUSION
The decision making (DM) problem is of great practical value in many areas of human activity. The most widely used DM methods are based on probabilistic approaches: the well-known Bayes theorem for a conditional probability density function (PDF) is the background for such techniques. It is needed because of the uncertainty in the many parameters that enter any model describing the functioning of real systems or objects. Operations research is thus the innovative mathematical practice of applying advanced analytical methods to help make better decisions in business organizations. Mathematical programming has been used to solve a considerable range of problems in business organizations: forming portfolios of equities, and employee-oriented, customer-oriented, product-oriented and production-oriented problems, among others. Today’s global markets and instant communications mean that customers expect high-quality products and services when they need them, where they need them. Organizations, whether public or private, need to provide these products and services as effectively and efficiently as possible, using OR mathematical tools.

Lastly, the goal of operations research is to provide a framework for constructing models of decision-making problems, finding the best solutions with respect to a given measure of merit, and implementing the solutions in an attempt to solve the problems. The steps of the OR process reviewed above lead from a problem to a solution. The problem is a situation arising in an organization that requires some solution. The decision maker is the individual or group responsible for making decisions regarding the solution. The individual or group called upon to aid the decision maker in the problem solving process is the analyst.


SIMULATION


INTRODUCTION
Simulation is a way to model random events such that simulated outcomes closely match real-world outcomes. By observing simulated outcomes, researchers gain insight into the real world. Simulation is a procedure for creating a model that behaves like a real situation. From the experimental results of many simulated trials, an estimated probability can be calculated; for example, a simulation that makes random guesses for each question of a test can estimate the probability of passing by guessing, as in the sketch below.
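A hedged sketch of that guessing example (all parameters invented): estimate by repeated simulated trials the probability of passing a 10-question multiple-choice test, with four options per question and a pass mark of 6, by guessing at random.

    import random

    def pass_rate(trials=100_000, questions=10, options=4, pass_mark=6):
        passes = 0
        for _ in range(trials):
            correct = sum(random.random() < 1 / options for _ in range(questions))
            if correct >= pass_mark:
                passes += 1
        return passes / trials   # estimated probability of passing by guessing

    print(pass_rate())   # roughly 0.02 for one-in-four random guesses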
Simulation is the imitation of the operation of a real-world process or system over time. The act of simulating something first requires that a model be developed; this model represents the key characteristics or behaviors/functions of the selected physical or abstract system or process. The model represents the system itself, whereas the simulation represents the operation of the system over time (Sokolowski, 2009).
LITERATURE REVIEW
Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, testing, training, education, and video games. Often, computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist (Carson, 2001).
Key issues in simulation include acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and fidelity and validity of the simulation outcomes.
CLASSIFICATION AND TERMINOLOGY
Historically, simulations used in different fields developed largely independently, but 20th century studies of systems theory and cybernetics combined with spreading use of computers across all those fields have led to some unification and a more systematic view of the concept.
Physical simulation refers to simulation in which physical objects are substituted for the real thing (some circles use the term for computer simulations modelling selected laws of physics, but this article doesn’t). These physical objects are often chosen because they are smaller or cheaper than the actual object or system.
Interactive simulation is a special kind of physical simulation, often referred to as a human in the loop simulation, in which physical simulations include human operators, such as in a flight simulator or a driving simulator.
Human in the loop simulations can include a computer simulation as a so-called synthetic environment.
Simulation in failure analysis refers to simulation in which we create an environment and conditions to identify the cause of equipment failure. This is often the best and fastest method of identifying the cause of a failure.
A computer simulation (or “sim”) is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system. It is a tool to virtually investigate the behaviour of the system under study.
Computer simulation has become a useful part of modeling many natural systems in physics, chemistry and biology, and human systems in economics and social science (e.g., computational sociology) as well as in engineering to gain insight into the operation of those systems. A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation. In such simulations, the model behaviour will change each simulation according to the set of initial parameters assumed for the environment.
Traditionally, the formal modeling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions. Computer simulation is often used as an adjunct to, or substitution for, modeling systems for which simple closed form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible.
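As an illustrative sketch (rates invented, plain Python) of sampling representative scenarios instead of enumerating states, the following simulates a single-server queue with random arrivals and service times; the analytic M/M/1 answer happens to exist here, which lets the simulation be checked.

    import random

    def mean_wait(arrival_rate=0.8, service_rate=1.0, customers=200_000):
        """Average waiting time in queue, estimated by simulation."""
        arrival = 0.0        # arrival time of the current customer
        server_free = 0.0    # time at which the server next becomes free
        total_wait = 0.0
        for _ in range(customers):
            arrival += random.expovariate(arrival_rate)
            start = max(arrival, server_free)      # wait if the server is busy
            total_wait += start - arrival
            server_free = start + random.expovariate(service_rate)
        return total_wait / customers

    # Queueing theory gives Wq = rho/(mu - lambda) = 0.8/0.2 = 4.0 here.
    print(mean_wait())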
Several software packages exist for running computer-based simulation modeling (e.g. Monte Carlo simulation, stochastic modeling, multimethod modeling) that make the modeling almost effortless. Modern usage of the term “computer simulation” may encompass virtually any computer-based representation.
COMPUTER SCIENCE
In computer science, simulation has some specialized meanings: Alan Turing used the term “simulation” to refer to what happens when a universal machine executes a state transition table (in modern terminology, a computer runs a program) that describes the state transitions, inputs and outputs of a subject discrete-state machine. The computer simulates the subject machine. Accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics.
Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an emulator, is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see Computer architecture simulator and Platform virtualization). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. Since the operation of the computer is simulated, all of the information about the computer’s operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will.
Simulators may also be used to interpret fault trees, or to test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values (Davidovitch, 2009).
In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies.
SIMULATION IN EDUCATION AND TRAINING
Simulation is extensively used for educational purposes. It is frequently used by way of adaptive hypermedia.
Simulation is often used in the training of civilian and military personnel.[7] This usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations they spend time learning valuable lessons in a “safe” virtual environment while living a lifelike experience (or at least that is the goal). A frequent benefit is that mistakes can be permitted during training for a safety-critical system. There is a distinction, though, between simulations used for training and instructional simulation.
Training simulations typically come in one of three categories:[8]
• “live” simulation (where actual players use genuine systems in a real environment);
• “virtual” simulation (where actual players use simulated systems in a synthetic environment [5]), or
• “constructive” simulation (where simulated players use simulated systems in a synthetic environment). Constructive simulation is often referred to as “wargaming” since it bears some resemblance to table-top war games in which players command armies of soldiers and equipment that move around a board.
In standardized tests, “live” simulations are sometimes called “high-fidelity”, producing “samples of likely performance”, as opposed to “low-fidelity”, “pencil-and-paper” simulations producing only “signs of possible performance”,[9] but the distinction between high, moderate and low fidelity remains relative, depending on the context of a particular comparison.
Simulations in education are somewhat like training simulations. They focus on specific tasks. The term ‘microworld’ is used to refer to educational simulations which model some abstract concept rather than simulating a realistic object or environment, or in some cases model a real world environment in a simplistic way so as to help a learner develop an understanding of the key concepts. Normally, a user can create some sort of construction within the microworld that will behave in a way consistent with the concepts being modeled. Seymour Papert was one of the first to advocate the value of microworlds, and the Logo programming environment developed by Papert is one of the most famous microworlds. As another example, the Global Challenge Award online STEM learning web site uses microworld simulations to teach science concepts related to global warming and the future of energy. Other projects for simulation in education include Open Source Physics, NetSim, etc.
Project Management Simulation is increasingly used to train students and professionals in the art and science of project management. Using simulation for project management training improves learning retention and enhances the learning process.
Social simulations may be used in social science classrooms to illustrate social and political processes in anthropology, economics, history, political science, or sociology courses, typically at the high school or university level. These may, for example, take the form of civics simulations, in which participants assume roles in a simulated society, or international relations simulations in which participants engage in negotiations, alliance formation, trade, diplomacy, and the use of force. Such simulations might be based on fictitious political systems, or be based on current or historical events. An example of the latter would be Barnard College’s Reacting to the Past series of historical educational games. The National Science Foundation has also supported the creation of reacting games that address science and math education (Shtub, 2009).
In recent years, there has been increasing use of social simulations for staff training in aid and development agencies. The Carana simulation, for example, was first developed by the United Nations Development Programme, and is now used in a very revised form by the World Bank for training staff to deal with fragile and conflict-affected countries.
COMMON USER INTERACTION SYSTEMS FOR VIRTUAL SIMULATIONS
Virtual simulations represent a specific category of simulation that utilizes simulation equipment to create a simulated world for the user. Virtual simulations allow users to interact with a virtual world. Virtual worlds operate on platforms of integrated software and hardware components. In this manner, the system can accept input from the user (e.g., body tracking, voice/sound recognition, physical controllers) and produce output to the user (e.g., visual display, aural display, haptic display). Virtual Simulations use the aforementioned modes of interaction to produce a sense of immersion for the user.
VIRTUAL SIMULATION INPUT HARDWARE
There is a wide variety of input hardware available to accept user input for virtual simulations. The following list briefly describes several of them:
Body tracking The motion capture method is often used to record the user’s movements and translate the captured data into inputs for the virtual simulation. For example, if a user physically turns their head, the motion would be captured by the simulation hardware in some way and translated to a corresponding shift in view within the simulation.
• Capture suits and/or gloves may be used to capture movements of users body parts. The systems may have sensors incorporated inside them to sense movements of different body parts (e.g., fingers). Alternatively, these systems may have exterior tracking devices or marks that can be detected by external ultrasound, optical receivers or electromagnetic sensors. Internal inertial sensors are also available on some systems. The units may transmit data either wirelessly or through cables.
• Eye trackers can also be used to detect eye movements so that the system can determine precisely where a user is looking at any given instant.
Physical controllers Physical controllers provide input to the simulation only through direct manipulation by the user. In virtual simulations, tactile feedback from physical controllers is highly desirable in a number of simulation environments.
• Omnidirectional treadmills can be used to capture the user’s locomotion as they walk or run.
• High fidelity instrumentation such as instrument panels in virtual aircraft cockpits provides users with actual controls to raise the level of immersion. For example, pilots can use the actual global positioning system controls from the real device in a simulated cockpit to help them practice procedures with the actual device in the context of the integrated cockpit system.
Voice/sound recognition This form of interaction may be used either to interact with agents within the simulation (e.g., virtual people) or to manipulate objects in the simulation (e.g., information). Voice interaction presumably increases the level of immersion for the user.
• Users may use headsets with boom microphones, lapel microphones or the room may be equipped with strategically located microphones.
Current research into user input systems Research into future input systems holds a great deal of promise for virtual simulations. Systems such as brain-computer interfaces (BCIs) offer the ability to further increase the level of immersion for virtual simulation users. Lee, Keinrath, Scherer, Bischof, and Pfurtscheller showed that naïve subjects could be trained to use a BCI to navigate a virtual apartment with relative ease. Using the BCI, the authors found that subjects were able to navigate the virtual environment freely with relatively minimal effort. It is possible that these types of systems will become standard input modalities in future virtual simulation systems.
VIRTUAL SIMULATION OUTPUT HARDWARE
There is a wide variety of output hardware available to deliver stimulus to users in virtual simulations. The following list briefly describes several of them:
Visual display Visual displays provide the visual stimulus to the user.
• Stationary displays can vary from a conventional desktop display to 360-degree wrap-around screens to stereo three-dimensional screens. Conventional desktop displays can vary in size from 15 to 60+ inches. Wrap-around screens are typically utilized in what is known as a Cave Automatic Virtual Environment (CAVE). Stereo three-dimensional screens produce three-dimensional images either with or without special glasses, depending on the design.
• Head-mounted displays (HMDs) have small displays that are mounted on headgear worn by the user. These systems are connected directly into the virtual simulation to provide the user with a more immersive experience. Weight, update rate, and field of view are some of the key variables that differentiate HMDs. Naturally, heavier HMDs are undesirable as they cause fatigue over time. If the update rate is too slow, the system is unable to update the displays fast enough to correspond with a quick head turn by the user. Slower update rates tend to cause simulation sickness and disrupt the sense of immersion. Field of view, the angular extent of the world that is seen at a given moment, can vary from system to system and has been found to affect the user’s sense of immersion.
Aural display: Several different types of audio systems exist to help the user hear and localize sounds spatially. Special software can be used to produce 3D audio effects that create the illusion that sound sources are placed within a defined three-dimensional space around the user (a toy localization calculation is sketched after the list below).
• Stationary conventional speaker systems may be used to provide dual-channel or multi-channel surround sound. However, external speakers are not as effective as headphones in producing 3D audio effects.
• Conventional headphones offer a portable alternative to stationary speakers. They also have the added advantages of masking real-world noise and facilitating more effective 3D audio effects.
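To make the 3D audio idea concrete, the toy sketch below estimates the interaural time difference (ITD), one of the cues spatial audio software reproduces, using the classic Woodworth head model. The head radius and the example azimuth are illustrative assumptions, not parameters of any particular audio product.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875,
                               speed_of_sound=343.0):
    """Woodworth approximation of ITD for a source at a given azimuth.

    azimuth_deg: source angle from straight ahead (0 = front, 90 = right).
    Returns the extra travel time (seconds) to the far ear versus the near ear.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source 45 degrees to the right arrives roughly 0.4 ms later at the far ear.
print(f"{interaural_time_difference(45) * 1000:.3f} ms")
```

Spatial audio engines combine such time differences with level differences and filtering to place a virtual sound source around the listener.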
Haptic display: These displays provide a sense of touch to the user. This type of output is sometimes referred to as force feedback.
• Tactile displays use different types of actuators, such as inflatable bladders, vibrators, low-frequency sub-woofers, pin actuators, and/or thermo-actuators, to produce sensations for the user.
• End-effector displays can respond to users’ inputs with resistance and force. These systems are often used in medical applications for remote surgeries that employ robotic instruments.
Vestibular display: These displays provide a sense of motion to the user. They often manifest as motion bases for virtual vehicle simulation, such as driving simulators or flight simulators. Motion bases are fixed in place but use actuators to move the simulator in ways that can produce the sensations of pitching, yawing, or rolling. The simulators can also move in such a way as to produce a sense of acceleration on all axes (e.g., the motion base can produce the sensation of falling).
CLINICAL HEALTHCARE SIMULATORS
Medical simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures, as well as medical concepts and decision making, to personnel in the health professions. Simulators have been developed for training procedures ranging from basics such as blood draw to laparoscopic surgery and trauma care. They are also important in prototyping new devices for biomedical engineering problems. Currently, simulators are applied to research and develop tools for new therapies, treatments, and early diagnosis in medicine.
Many medical simulators involve a computer connected to a plastic simulation of the relevant anatomy. Sophisticated simulators of this type employ a life size mannequin that responds to injected drugs and can be programmed to create simulations of life-threatening emergencies. In other simulations, visual components of the procedure are reproduced by computer graphics techniques, while touch-based components are reproduced by haptic feedback devices combined with physical simulation routines computed in response to the user’s actions. Medical simulations of this sort will often use 3D CT or MRI scans of patient data to enhance realism. Some medical simulations are developed to be widely distributed (such as web-enabled simulations and procedural simulations that can be viewed via standard web browsers) and can be interacted with using standard computer interfaces, such as the keyboard and mouse.
Another important medical application of a simulator (although, perhaps, denoting a slightly different meaning of simulator) is the use of a placebo drug, a formulation that simulates the active drug in trials of drug efficacy.
IMPROVING PATIENT SAFETY
Innovative simulation training solutions are now being used to train medical professionals in an attempt to reduce the number of safety concerns that have adverse effects on patients. However, according to the article Does Simulation Improve Patient Safety? Self-efficacy, Competence, Operational Performance, and Patient Safety (Nishisaki A., Keren R., and Nadkarni, V., 2007), the jury is still out. Nishisaki states that “There is good evidence that simulation training improves provider and team self-efficacy and competence on manikins. There is also good evidence that procedural simulation improves actual operational performance in clinical settings. However, no evidence yet shows that crew resource management training through simulation, despite its promise, improves team operational performance at the bedside.” Although evidence that simulation-based training actually improves patient outcomes has been slow to accrue, today the ability of simulation to provide hands-on experience that translates to the operating room is no longer in doubt.
One such attempt to improve patient safety through simulation training is in pediatric care, where training is delivered just in time and just in place. This training consists of 20 minutes of simulated training just before workers report to their shift. It is hoped that the recency of the training will increase the positive results, and reduce the negative results, that have generally been associated with the procedure. The purpose of the study was to determine whether just-in-time training improves patient safety and operational performance of orotracheal intubation and decreases occurrences of undesired associated events, and “to test the hypothesis that high fidelity simulation may enhance the training efficacy and patient safety in simulation settings.” The conclusions, as reported in Abstract P38: Just-In-Time Simulation Training Improves ICU Physician Trainee Airway Resuscitation Participation without Compromising Procedural Success or Safety (Nishisaki A., 2008), were that simulation training improved resident participation in real cases without sacrificing the quality of service. It could therefore be hypothesized that, by increasing the number of highly trained residents, simulation training does in fact increase patient safety. This hypothesis would have to be researched for validation, and the results may or may not generalize to other situations.
COMPUTER SIMULATORS
Simulators have been proposed as an ideal tool for assessment of students for clinical skills. For patients, “cybertherapy” can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety.
Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These “lifelike” simulations are expensive and lack reproducibility. A fully functional “3Di” simulator would be the most specific tool available for teaching and measuring clinical skills. Gaming platforms have been applied to create these virtual medical environments, offering an interactive method for learning and applying information in a clinical context.
Immersive disease-state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient’s disease state.
Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use “standard patients” because it permits the quantitative measurement of competence, as well as reproducing the same objective findings.
SIMULATION AND MANUFACTURING
Manufacturing represents one of the most important applications of simulation. This technique is a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities such as factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem.
Another important goal of manufacturing simulation is to quantify system performance. Common measures of system performance include the following (a minimal queueing sketch follows the list):
• Throughput under average and peak loads;
• System cycle time (how long it takes to produce one part);
• Utilization of resources, labor, and machines;
• Bottlenecks and choke points;
• Queuing at work locations;
• Queuing and delays caused by material-handling devices and systems;
• Work-in-process (WIP) storage needs;
• Staffing requirements;
• Effectiveness of scheduling systems;
• Effectiveness of control systems.
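To make several of these measures concrete, the minimal sketch below hand-rolls a single-machine discrete-event simulation and estimates throughput, machine utilization, and average cycle time. The exponential arrival and processing times are illustrative assumptions, not data from any real plant.

```python
import random

def simulate_single_machine(n_parts=10_000, mean_interarrival=1.0,
                            mean_process=0.8, seed=42):
    """Minimal single-machine simulation fed by random arrivals (M/M/1-style).

    Returns estimated throughput (parts per unit time), machine
    utilization, and average cycle time (arrival to completion).
    """
    rng = random.Random(seed)
    clock = 0.0            # running arrival time
    machine_free_at = 0.0  # time at which the machine next becomes idle
    busy_time = 0.0
    total_cycle_time = 0.0

    for _ in range(n_parts):
        clock += rng.expovariate(1.0 / mean_interarrival)  # next arrival
        start = max(clock, machine_free_at)                # wait if busy
        service = rng.expovariate(1.0 / mean_process)
        machine_free_at = start + service
        busy_time += service
        total_cycle_time += machine_free_at - clock        # queue + service

    makespan = machine_free_at
    return (n_parts / makespan, busy_time / makespan,
            total_cycle_time / n_parts)

throughput, utilization, cycle_time = simulate_single_machine()
print(f"throughput={throughput:.3f}/unit time, "
      f"utilization={utilization:.2%}, cycle time={cycle_time:.2f}")
```

With the assumed rates the machine is loaded at roughly 80 percent, so queueing, rather than processing, accounts for most of the cycle time; this is exactly the kind of insight such models are used to surface.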
MORE EXAMPLES OF SIMULATION
Automobiles
An automobile simulator provides an opportunity to reproduce the characteristics of real vehicles in a virtual environment. It replicates the external factors and conditions with which a vehicle interacts, enabling a driver to feel as if they are sitting in the cab of their own vehicle. Scenarios and events are replicated with sufficient realism to ensure that drivers become fully immersed in the experience rather than simply viewing it as an educational exercise.
The simulator provides a constructive experience for the novice driver and enables more complex exercises to be undertaken by the more mature driver. For novice drivers, truck simulators provide an opportunity to begin their career by applying best practice. For mature drivers, simulation provides the ability to enhance good driving, or to detect poor practice and suggest the necessary steps for remedial action. For companies, it provides an opportunity to educate staff in driving skills that reduce maintenance costs, improve productivity and, most importantly, ensure the safety of their actions in all possible situations.
Biomechanics
One open-source simulation platform supports creating dynamic mechanical models built from combinations of rigid and deformable bodies, joints, constraints, and various force actuators. It is specialized for creating biomechanical models of human anatomical structures, with the intention of studying their function and eventually assisting in the design and planning of medical treatment.
A biomechanics simulator is used to analyze walking dynamics, study sports performance, simulate surgical procedures, analyze joint loads, design medical devices, and animate human and animal movement.
A neuromechanical simulator combines biomechanical and biologically realistic neural network simulation. It allows the user to test hypotheses on the neural basis of behavior in a physically accurate 3-D virtual environment.
City and urban
A city simulator can be a city-building game, but it can also be a tool used by urban planners to understand how cities are likely to evolve in response to various policy decisions. AnyLogic is an example of a modern, large-scale urban simulator designed for use by urban planners. City simulators are generally agent-based simulations with explicit representations for land use and transportation; a classic toy example of this modeling style is sketched below. UrbanSim and LEAM are examples of large-scale urban simulation models that are used by metropolitan planning agencies and military bases for land use and transportation planning.
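As a hint of how agent-based urban models work, the sketch below implements Schelling’s classic segregation model, a textbook toy in which households relocate until enough of their neighbors are of their own type. It illustrates the agent-based style only; it is not the mechanism of UrbanSim, LEAM, or AnyLogic, and all parameters are made up.

```python
import random

def schelling(size=20, fill=0.8, threshold=0.5, steps=50, seed=1):
    """Toy Schelling segregation model: two agent types on a toroidal grid.

    An agent is unhappy if fewer than `threshold` of its occupied neighbor
    cells share its type; unhappy agents move to random empty cells.
    """
    rng = random.Random(seed)
    cells = [rng.choice([1, 2]) if rng.random() < fill else 0
             for _ in range(size * size)]
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def unhappy(r, c):
        kind = grid[r][c]
        same = other = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                n = grid[(r + dr) % size][(c + dc) % size]
                if n == kind:
                    same += 1
                elif n != 0:
                    other += 1
        return same + other > 0 and same / (same + other) < threshold

    for _ in range(steps):
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] and unhappy(r, c)]
        empties = [(r, c) for r in range(size) for c in range(size)
                   if grid[r][c] == 0]
        rng.shuffle(movers)
        for r, c in movers:
            if not empties:
                break
            er, ec = empties.pop(rng.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], 0
            empties.append((r, c))

    still = sum(unhappy(r, c) for r in range(size) for c in range(size)
                if grid[r][c])
    print(still, "unhappy agents remain after", steps, "steps")

schelling()
```

Production urban simulators follow the same loop (agents evaluating locations and relocating) but with empirically estimated utility functions and real land-use and transportation data.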
Classroom of the future
The “classroom of the future” will probably contain several kinds of simulators, in addition to textual and visual learning tools. This will allow students to enter the clinical years better prepared, and with a higher skill level. The advanced student or postgraduate will have a more concise and comprehensive method of retraining — or of incorporating new clinical procedures into their skill set — and regulatory bodies and medical institutions will find it easier to assess the proficiency and competency of individuals.
The classroom of the future will also form the basis of a clinical skills unit for continuing education of medical personnel; and in the same way that the use of periodic flight training assists airline pilots, this technology will assist practitioners throughout their career.
The simulator will be more than a “living” textbook; it will become an integral part of the practice of medicine. The simulator environment will also provide a standard platform for curriculum development in institutions of medical education.
Communication satellites
Modern satellite communications systems (SatCom) are often large and complex, with many interacting parts and elements. In addition, the need for broadband connectivity on a moving vehicle has increased dramatically in the past few years for both commercial and military applications. To accurately predict and deliver high quality of service, SatCom system designers have to factor in terrain as well as atmospheric and meteorological conditions in their planning. To deal with such complexity, system designers and operators increasingly turn to computer models of their systems to simulate real-world operational conditions and gain insights into usability and requirements prior to final product sign-off. Modeling improves the understanding of the system by enabling the SatCom system designer or planner to simulate real-world performance by injecting the models with multiple hypothetical atmospheric and environmental conditions. Simulation is also often used in the training of civilian and military personnel. This usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations, trainees can spend time learning valuable lessons in a “safe” virtual environment while living a lifelike experience (or at least that is the goal). Often the point is to permit mistakes during training with a safety-critical system.
Digital Lifecycle
Simulation solutions are being increasingly integrated with CAx (CAD, CAM, CAE, etc.) solutions and processes. The use of simulation throughout the product lifecycle, especially at the earlier concept and design stages, has the potential to provide substantial benefits. These benefits range from direct cost savings, such as reduced prototyping and shorter time-to-market, to better-performing products and higher margins. However, for some companies, simulation has not provided the expected benefits.
The research firm Aberdeen Group has found that nearly all best-in-class manufacturers use simulation early in the design process, whereas most laggards do not.
The successful use of simulation, early in the lifecycle, has been largely driven by increased integration of simulation tools with the entire CAD, CAM, and PLM solution set. Simulation solutions can now function across the extended enterprise in a multi-CAD environment, and include solutions for managing simulation data and processes and ensuring that simulation results are made part of the product lifecycle history. The ability to use simulation across the entire lifecycle has been enhanced through improved, tailorable user interfaces and “wizards”, which allow all appropriate PLM participants to take part in the simulation process.
Disaster preparedness
Simulation training has become a method for preparing people for disasters. Simulations can replicate emergency situations and track how learners respond thanks to a lifelike experience. Disaster preparedness simulations can involve training on how to handle terrorism attacks, natural disasters, pandemic outbreaks, or other life-threatening emergencies.
One organization that has used simulation training for disaster preparedness is CADE (Center for Advancement of Distance Education). CADE has used a video game to prepare emergency workers for multiple types of attacks. As reported by News-Medical.Net, ”The video game is the first in a series of simulations to address bioterrorism, pandemic flu, smallpox and other disasters that emergency personnel must prepare for.” Developed by a team from the University of Illinois at Chicago (UIC), the game allows learners to practice their emergency skills in a safe, controlled environment.
The Emergency Simulation Program (ESP) at the British Columbia Institute of Technology (BCIT) in Vancouver, British Columbia, Canada is another example of an organization that uses simulation to train for emergency situations. ESP uses simulation to train for the following situations: forest fire fighting, oil or chemical spill response, earthquake response, law enforcement, municipal fire fighting, hazardous material handling, military training, and response to terrorist attacks. One feature of the simulation system is the implementation of a “Dynamic Run-Time Clock,” which allows simulations to run in a “simulated” time frame, “speeding up” or “slowing down” time as desired. Additionally, the system allows session recordings, picture-icon based navigation, file storage of individual simulations, multimedia components, and the launching of external applications.
At the University of Québec in Chicoutimi, a research team at the outdoor research and expertise laboratory (Laboratoire d’Expertise et de Recherche en Plein Air – LERPA) specializes in using wilderness backcountry accident simulations to verify emergency response coordination.
Instructionally, one benefit of emergency training through simulation is that learner performance can be tracked through the system. This allows the developer to make adjustments as necessary or to alert the educator to topics that may require additional attention. Another advantage is that the learner can be guided or trained on how to respond appropriately before continuing to the next emergency segment, an aspect that may not be available in a live environment. Some emergency training simulators also allow for immediate feedback, while other simulations may provide a summary and instruct the learner to engage with the learning topic again.
In a live emergency situation, emergency responders do not have time to waste. Simulation training in this environment provides an opportunity for learners to gather as much information as they can and to practice their knowledge in a safe environment. They can make mistakes without risk of endangering lives, and be given the opportunity to correct their errors to prepare for a real-life emergency.
Engineering, technology, and processes
Simulation is an important feature in engineering systems or any system that involves many processes. For example, in electrical engineering, delay lines may be used to simulate the propagation delay and phase shift caused by an actual transmission line. Similarly, dummy loads may be used to simulate impedance without simulating propagation, and are used in situations where propagation is unwanted. A simulator may imitate only a few of the operations and functions of the unit it simulates (in contrast with emulation).
Most engineering simulations entail mathematical modeling and computer-assisted investigation. There are many cases, however, where mathematical modeling alone is not reliable. Simulation of fluid dynamics problems often requires both mathematical and physical simulations. In these cases the physical models require dynamic similitude. Physical and chemical simulations also have direct practical uses, rather than purely research uses; in chemical engineering, for example, process simulations are used to give the process parameters immediately used for operating chemical plants, such as oil refineries. Simulators are also used for plant operator training. Such a system is called an operator training simulator (OTS) and has been widely adopted by many industries, from chemicals to oil and gas to power. It creates a safe and realistic virtual environment in which to train board operators and engineers. Mimic, for example, can provide high-fidelity dynamic models of nearly all chemical plants for operator training and control system testing.
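As a minimal illustration of the dynamic models behind an operator training simulator, the sketch below integrates a first-order tank-level balance with explicit Euler steps. The tank area, inflow, and valve coefficient are invented illustrative values, not parameters of Mimic or of any real plant.

```python
def simulate_tank(inflow=0.5, valve_coeff=0.3, area=2.0, level0=1.0,
                  dt=0.1, t_end=60.0):
    """First-order tank model: A * dh/dt = q_in - k * sqrt(h).

    Integrates the level h with explicit Euler and returns the trajectory,
    which approaches the steady state where inflow equals outflow.
    """
    levels = [level0]
    h = level0
    for _ in range(int(t_end / dt)):
        outflow = valve_coeff * max(h, 0.0) ** 0.5  # gravity-driven outflow
        h += dt * (inflow - outflow) / area
        levels.append(h)
    return levels

trajectory = simulate_tank()
print(f"level after 60 s ≈ {trajectory[-1]:.2f} m")
```

A full OTS chains hundreds of such unit models (tanks, heaters, columns, controllers) and runs them in real time against a replica of the plant's control interface.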
Equipment
Due to the dangerous and expensive nature of training on heavy equipment, simulation has become a common solution across many industries. Types of simulated equipment include cranes, mining reclaimers and construction equipment, among many others. Often the simulation units will include pre-built scenarios by which to teach trainees, as well as the ability to customize new scenarios. Such equipment simulators are intended to create a safe and cost effective alternative to training on live equipment.
Ergonomics
Ergonomic simulation involves the analysis of virtual products or manual tasks within a virtual environment. In the engineering process, the aim of ergonomics is to develop and improve the design of products and work environments. Ergonomic simulation utilizes an anthropometric virtual representation of the human, commonly referred to as a mannequin or digital human model (DHM), to mimic the postures, mechanical loads, and performance of a human operator in a simulated environment such as an airplane, automobile, or manufacturing facility. DHMs are recognized as an evolving and valuable tool for performing proactive ergonomics analysis and design. The simulations employ 3D graphics and physics-based models to animate the virtual humans. Ergonomics software uses inverse kinematics (IK) capability for posing the DHMs. Several ergonomic simulation tools have been developed, including Jack, SAFEWORK, RAMSIS, and SAMMIE.
The software tools typically calculate biomechanical properties, including individual muscle forces, joint forces, and moments. Most of these tools employ standard ergonomic evaluation methods such as the NIOSH lifting equation and Rapid Upper Limb Assessment (RULA); a worked sketch of the NIOSH equation follows below. Some simulations also analyze physiological measures including metabolism, energy expenditure, and fatigue limits. Cycle time studies, design and process validation, user comfort, reachability, and line of sight are other human factors that may be examined in ergonomic simulation packages.
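To give a flavor of these evaluation methods, the sketch below computes a recommended weight limit (RWL) and lifting index with the revised NIOSH lifting equation. The frequency and coupling multipliers normally come from published NIOSH tables, so they are passed in here as assumed inputs, and the example task geometry is purely illustrative.

```python
def niosh_rwl(h_cm, v_cm, d_cm, asymmetry_deg, freq_mult, coupling_mult):
    """Recommended weight limit (kg) per the revised NIOSH lifting equation.

    RWL = LC * HM * VM * DM * AM * FM * CM, where the frequency (FM) and
    coupling (CM) multipliers are supplied from the published NIOSH tables.
    """
    LC = 23.0                                # load constant, kg
    HM = min(25.0 / h_cm, 1.0)               # horizontal multiplier
    VM = 1.0 - 0.003 * abs(v_cm - 75.0)      # vertical multiplier
    DM = min(0.82 + 4.5 / d_cm, 1.0)         # travel-distance multiplier
    AM = 1.0 - 0.0032 * asymmetry_deg        # asymmetry multiplier
    return LC * HM * VM * DM * AM * freq_mult * coupling_mult

# Illustrative task: 40 cm reach, 60 cm start height, 50 cm lift, no twisting.
rwl = niosh_rwl(h_cm=40, v_cm=60, d_cm=50, asymmetry_deg=0,
                freq_mult=0.94, coupling_mult=1.0)
print(f"RWL = {rwl:.1f} kg, lifting index for a 15 kg load = {15 / rwl:.2f}")
```

A lifting index above 1.0 flags the task as posing an increased risk of lifting-related injury; ergonomic simulation packages automate such calculations from the posture of the DHM.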
Modeling and simulation of a task can be performed by manually manipulating the virtual human in the simulated environment. Some ergonomics simulation software permits interactive, real-time simulation and evaluation through actual human input via motion capture technologies. However, motion capture for ergonomics requires expensive equipment and the creation of props to represent the environment or product.
Applications of ergonomic simulation include analysis of solid waste collection, disaster management tasks, interactive gaming, automotive assembly lines, virtual prototyping of rehabilitation aids, and aerospace product design. Ford engineers use ergonomics simulation software to perform virtual product design reviews. Using engineering data, the simulations assist the evaluation of assembly ergonomics. The company uses Siemens’ Jack and Jill ergonomics simulation software to improve worker safety and efficiency, without the need to build expensive prototypes.
Finance
In finance, computer simulations are often used for scenario planning. Risk-adjusted net present value (NPV), for example, is computed from well-defined but not always known (or fixed) inputs. By imitating the performance of the project under evaluation, simulation can provide a distribution of NPV over a range of discount rates and other variables; a minimal Monte Carlo sketch is given below.
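The following sketch draws uncertain cash flows, discounts each draw, and summarizes the resulting NPV distribution. The outlay, cash-flow distribution, and discount rate are invented for illustration only.

```python
import random
import statistics

def npv(rate, cash_flows):
    """Net present value of cash flows, where cash_flows[0] occurs at time 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n_runs=10_000, rate=0.08, seed=7):
    """Monte Carlo NPV: a fixed outlay followed by uncertain yearly inflows."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        # Year 0 outlay of 100; five uncertain inflows of about 30 each.
        flows = [-100.0] + [rng.gauss(30.0, 8.0) for _ in range(5)]
        results.append(npv(rate, flows))
    return results

runs = simulate_npv()
print(f"mean NPV = {statistics.mean(runs):.1f}, "
      f"P(NPV < 0) = {sum(r < 0 for r in runs) / len(runs):.1%}")
```

The probability of a negative NPV, not just the mean, is what makes the simulated distribution more informative than a single point estimate.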
Simulations are frequently used in financial training to engage participants in experiencing various historical as well as fictional situations. There are stock market simulations, portfolio simulations, risk management simulations or models, and forex simulations. Such simulations are typically based on stochastic asset models. Using these simulations in a training program allows for the application of theory in something akin to real life. As with other industries, the use of simulations can be technology-driven or case-study-driven.
Flight
Flight Simulation Training Devices (FSTD) are used to train pilots on the ground. In comparison to training in an actual aircraft, simulation based training allows for the training of maneuvers or situations that may be impractical (or even dangerous) to perform in the aircraft, while keeping the pilot and instructor in a relatively low-risk environment on the ground. For example, electrical system failures, instrument failures, hydraulic system failures, and even flight control failures can be simulated without risk to the pilots or an aircraft.
Instructors can also provide students with a higher concentration of training tasks in a given period of time than is usually possible in the aircraft. For example, conducting multiple instrument approaches in the actual aircraft may require significant time spent repositioning the aircraft, while in a simulation, as soon as one approach has been completed, the instructor can immediately preposition the simulated aircraft to an ideal (or less than ideal) location from which to begin the next approach.
Flight simulation also provides an economic advantage over training in an actual aircraft. Once fuel, maintenance, and insurance costs are taken into account, the operating costs of an FSTD are usually substantially lower than the operating costs of the simulated aircraft. For some large transport category airplanes, the operating costs may be several times lower for the FSTD than the actual aircraft.
Some people who use simulator software, especially flight simulator software, build their own simulators at home. To further the realism of a homemade simulator, some buy used cards and racks that run the same software used by the original machine. While this involves solving the problem of matching hardware to software (and the problem that hundreds of cards plug into many different racks), many still find that solving these problems is well worthwhile. Some are so serious about realistic simulation that they will buy real aircraft parts, such as complete nose sections of written-off aircraft, at aircraft boneyards. This permits people to simulate a hobby that they are unable to pursue in real life.
Marine
Bearing resemblance to flight simulators, marine simulators train ships’ personnel. The most common marine simulators include:
• Ship’s bridge simulators
• Engine room simulators
• Cargo handling simulators
• Communication / GMDSS simulators
• ROV simulators
Simulators like these are mostly used within maritime colleges, training institutions, and navies. They often consist of a replication of a ship’s bridge, with operating console(s), and a number of screens on which the virtual surroundings are projected.
Military
Military simulations, also known informally as war games, are models in which theories of warfare can be tested and refined without the need for actual hostilities. They exist in many different forms, with varying degrees of realism. In recent times, their scope has widened to include not only military but also political and social factors (for example, the NationLab series of strategic exercises in Latin America). While many governments make use of simulation, both individually and collaboratively, little is known about the

CONCLUSION
Simulation can be a powerful alternative approach to doing science. Simulation makes it possible to study problems that are not easily addressed, or may be impossible to address, with other scientific approaches. Because organizations are complex systems and many of their characteristics and behaviors are often inaccessible to researchers, especially over long time frames, simulation can be a particularly useful research tool for organization theorists.
Simulation analysis offers a variety of benefits. It can be useful in developing theory and in guiding empirical work. It can provide insight into the operation of complex systems and explore their behavior. It can examine the consequences of theoretical arguments and assumptions, generate alternative explanations and hypotheses, and test the validity of explanations. Through its requirement for formal modeling, it imposes theoretical rigor and promotes scientific progress.
Simulation research also has problems and limitations. The value of simulation findings rests on the validity of the simulation model, which frequently must be constructed with little guidance from previous work and is prone to problems of misspecification. Experimental designs are often inadequate. Simulation work is technically demanding and highly susceptible to errors in computer programming. The data generated by simulations are not “real” and techniques for their analysis are limited. So claims based on simulation findings are necessarily qualified.
The role of simulation is not well understood by much of the organizational research community. Simulation is a legitimate, disciplined approach to scientific investigation, and its value needs to be recognized and appreciated. Properly used and kept in appropriate perspective, computer simulation is a useful research tool that opens up new avenues for organizational research. The computer simulations discussed in this paper offer a sample of one future direction in organizational research, and computer simulation is likely to generate many more such directions.

REFERENCES
• Banks, J., Carson, J., Nelson, B., Nicol, D. (2001). Discrete-Event System Simulation. Prentice Hall. p. 3. ISBN 0-13-088702-1.
• “Simulation”, in Encyclopedia of Computer Science: “designing a model of a real or imagined system and conducting experiments with that model”.
• Sokolowski, J.A., Banks, C.M. (2009). Principles of Modeling and Simulation. Hoboken, NJ: Wiley. p. 6. ISBN 978-0-470-28943-3.
• Examples of simulation in computer graphics: SIGGRAPH 2007 papers; the BlenderWiki physics tutorials.
• Thales defines synthetic environment as “the counterpart to simulated models of sensors, platforms and other active objects” for “the simulation of the external factors that affect them”, while other vendors use the term for more visual, virtual-reality-style simulators.
• Folding@home, a research project in biochemistry to which “computer simulation is particularly well suited to address these questions”.
• Towards Building an Interactive, Scenario-based Training Simulator (an academic treatment of a training simulator for medical application).
• Classification used by the Defense Modeling and Simulation Office.
• “High Versus Low Fidelity Simulations: Does the Type of Format Affect Candidates’ Performance or Perceptions?”
• Davidovitch, L., Parush, A., Shtub, A. (2008). “Simulation-based Learning: The Learning-Forgetting-Relearning Process and Impact of Learning History”. Computers & Education, Vol. 50, No. 3, pp. 866–880.
• Davidovitch, L., Parush, A., Shtub, A. (2009). “The Impact of Functional Fidelity in Simulator-based Learning of Project Management”. International Journal of Engineering Education, Vol. 25, No. 2, pp. 333–340.

A SYSTEMATIC REVIEW OF COLORECTAL CANCER


A SYSTEMATIC REVIEW OF COLORECTAL CANCER
INTRODUCTION
Colorectal cancer (CRC) is a common and lethal disease. The risk of developing CRC is influenced by both environmental and genetic factors. Colorectal cancer (also known as colon cancer, rectal cancer, or bowel cancer) is the development of cancer in the colon or rectum (parts of the large intestine). It is due to the abnormal growth of cells that have the ability to invade or spread to other parts of the body. Signs and symptoms may include blood in the stool, a change in bowel movements, weight loss, and feeling tired all the time.
Risk factors for colorectal cancer include lifestyle, older age, and inherited genetic disorders. Other risk factors include diet, smoking, alcohol, lack of physical activity, family history of colon cancer and colon polyps, presence of colon polyps, race, exposure to radiation, and other diseases such as diabetes and obesity. Genetic disorders occur in only a small fraction of the population. A diet high in red and processed meat and low in fiber increases the risk of colorectal cancer. Other diseases such as inflammatory bowel disease, which includes Crohn’s disease and ulcerative colitis, can increase the risk of colorectal cancer. Some of the inherited genetic disorders that can cause colorectal cancer include familial adenomatous polyposis and hereditary non-polyposis colon cancer; however, these represent less than 5% of cases. Colorectal cancer typically starts as a benign tumor, often in the form of a polyp, which over time becomes cancerous.
Bowel cancer may be diagnosed by obtaining a sample of the colon during a sigmoidoscopy or colonoscopy. This is then followed by medical imaging to determine whether the disease has spread. Screening is effective for preventing and decreasing deaths from colorectal cancer, and is recommended from age 50 to 75. During colonoscopy, small polyps may be removed if found. If a large polyp or tumor is found, a biopsy may be performed to check whether it is cancerous. Aspirin and other non-steroidal anti-inflammatory drugs decrease the risk; their general use is not recommended for this purpose, however, due to side effects.
REVIEW OF COLORECTAL CANCER
Colorectal cancer is a disease originating from the epithelial cells lining the colon or rectum of the gastrointestinal tract, most frequently as a result of mutations in the Wnt signaling pathway that increase signaling activity. The mutations can be inherited or acquired, and most probably occur in the intestinal crypt stem cell. The most commonly mutated gene in all colorectal cancer is the APC gene, which produces the APC protein. The APC protein prevents the accumulation of β-catenin protein. Without APC, β-catenin accumulates to high levels and translocates (moves) into the nucleus, binds to DNA, and activates the transcription of proto-oncogenes. These genes are normally important for stem cell renewal and differentiation, but when inappropriately expressed at high levels, they can cause cancer. While APC is mutated in most colon cancers, some cancers have increased β-catenin because of mutations in β-catenin (CTNNB1) that block its own breakdown, or have mutations in other genes with function similar to APC such as AXIN1, AXIN2, TCF7L2, or NKD1.
Beyond the defects in the Wnt signaling pathway, other mutations must occur for the cell to become cancerous. The p53 protein, produced by the TP53 gene, normally monitors cell division and kills cells if they have Wnt pathway defects. Eventually, a cell line acquires a mutation in the TP53 gene and transforms the tissue from a benign epithelial tumor into an invasive epithelial cell cancer. Sometimes the gene encoding p53 is not mutated, but another protective protein named BAX is mutated instead.
Other proteins responsible for programmed cell death that are commonly deactivated in colorectal cancers are TGF-β and DCC (Deleted in Colorectal Cancer). TGF-β has a deactivating mutation in at least half of colorectal cancers. Sometimes TGF-β is not deactivated, but a downstream protein named SMAD is deactivated. DCC commonly has a deleted segment of a chromosome in colorectal cancer.
Some genes are oncogenes: they are overexpressed in colorectal cancer. For example, genes encoding the proteins KRAS, RAF, and PI3K, which normally stimulate the cell to divide in response to growth factors, can acquire mutations that result in over-activation of cell proliferation. The chronological order of mutations is sometimes important: if a previous APC mutation occurred, a primary KRAS mutation often progresses to cancer rather than to a self-limiting hyperplastic or borderline lesion. PTEN, a tumor suppressor, normally inhibits PI3K, but can sometimes become mutated and deactivated.
Diagnosis
Diagnosis of colorectal cancer is via sampling of areas of the colon suspicious for possible tumor development, typically done during colonoscopy or sigmoidoscopy, depending on the location of the lesion. The extent of the disease is then usually determined by a CT scan of the chest, abdomen, and pelvis. Other imaging tests, such as PET and MRI, may be used in certain cases. Colon cancer staging is done next, based on the TNM system, which is determined by how much the initial tumor has spread, whether and where lymph nodes are involved, and the extent of metastatic disease.
The microscopic cellular characteristics of the tumor are usually reported from the analysis of tissue taken from a biopsy or surgery. A pathology report will usually contain a description of cell type and grade. The most common colon cancer cell type is adenocarcinoma, which accounts for 98% of cases. Other, rarer types include lymphoma and squamous cell carcinoma.
Treatment
Treatments used for colorectal cancer may include some combination of surgery, radiation therapy, chemotherapy, and targeted therapy. Cancers that are confined within the wall of the colon may be curable with surgery, while cancers that have spread widely are usually not curable, with management focusing on improving quality of life and symptoms. Five-year survival rates in the United States are around 65%. This, however, depends on how advanced the cancer is, whether or not all the cancer can be removed with surgery, and the person’s overall health. Globally, colorectal cancer is the third most common type of cancer, making up about 10% of all cases. In 2012 there were 1.4 million new cases and 694,000 deaths from the disease. It is more common in developed countries, where more than 65% of cases are found.[4] It is less common in women than men.


Signs and symptoms

[Figure: location and appearance of two example colorectal tumors]
The signs and symptoms of colorectal cancer depend on the location of the tumor in the bowel, and whether it has spread elsewhere in the body (metastasis). The classic warning signs include worsening constipation, blood in the stool, decrease in stool caliber (thickness), loss of appetite, loss of weight, and nausea or vomiting in someone over 50 years old. While rectal bleeding or anemia are high-risk features in those over the age of 50, other commonly described symptoms, including weight loss and change in bowel habit, are typically only concerning if associated with bleeding.
Cause
Most colon cancer (greater than 75–95% of cases) occurs in people with little or no genetic risk. Other risk factors include older age, male gender, high intake of fat, alcohol or red meat, obesity, smoking, and a lack of physical exercise. Approximately 10% of cases are linked to insufficient activity. The risk from alcohol appears to increase above one drink per day. Drinking five glasses of water a day is linked to a decrease in the risk of colorectal cancer and adenomatous polyps.
EPIDEMIOLOGY
CRC incidence and mortality rates vary markedly around the world. Globally, CRC is the third most commonly diagnosed cancer in males and the second in females, with 1.4 million new cases and almost 694,000 deaths estimated to have occurred in 2012. Rates are substantially higher in males than in females. Global, country-specific incidence and mortality rates are available in the World Health Organization GLOBOCAN database.
In the United States, both the incidence and mortality have been slowly but steadily decreasing. Annually, approximately 132,700 new cases of large bowel cancer are diagnosed, of which 93,090 are colon and the remainder rectal cancers. Annually, approximately 49,700 Americans die of CRC, accounting for approximately 8 percent of all cancer deaths.
Incidence: Globally, the incidence of CRC varies over 10-fold. The highest incidence rates are in Australia and New Zealand, Europe, and North America, and the lowest rates are found in Africa and South-Central Asia. These geographic differences appear to be attributable to differences in dietary and environmental exposures imposed upon a background of genetically determined susceptibility.
Low socioeconomic status (SES) is also associated with an increased risk of developing colorectal cancer; one study estimated CRC risk to be about 30 percent higher in the lowest SES quintile than in the highest. Potentially modifiable behaviors such as physical inactivity, unhealthy diet, smoking, and obesity are thought to account for a substantial proportion (estimates of one-third to one-half) of the socioeconomic disparity in risk of new-onset colorectal cancer. Other factors, particularly lower rates of CRC screening, also contribute substantively to SES differences in CRC risk.
Globally, more than 1 million people get colorectal cancer every year, resulting in about 715,000 deaths as of 2010, up from 490,000 in 1990. As of 2012, it is the second most common cause of cancer in women (9.2% of diagnoses), the third most common in men (10.0%), and the fourth most common cause of cancer death after lung, stomach, and liver cancer. It is more common in developed than developing countries. Globally, incidence varies 10-fold, with the highest rates in Australia, New Zealand, Europe, and the US, and the lowest rates in Africa and South-Central Asia.
Worldwide, colorectal cancer represents 9.4% of all incident cancer in men and 10.1% in women. Colorectal cancer, however, is not uniformly common throughout the world. There is a large geographic difference in the global distribution of colorectal cancer. Colorectal cancer is mainly a disease of developed countries with a Western culture; in fact, the developed world accounts for over 63% of all cases. The incidence rate varies up to 10-fold between the countries with the highest rates and those with the lowest rates, ranging from more than 40 per 100,000 people in the United States, Australia, New Zealand, and Western Europe to less than 5 per 100,000 in Africa and some parts of Asia. However, these incidence rates may be susceptible to ascertainment bias; there may be a high degree of underreporting in developing countries.

CONCLUSION
Colorectal cancer is a major cause of morbidity and mortality throughout the world. It accounts for over 9% of all cancer incidence. It is the third most common cancer worldwide and the fourth most common cause of death. It affects men and women almost equally, with just over 1 million new cases recorded in 2002, the most recent year for which international estimates are available. Countries with the highest incidence rates include Australia, New Zealand, Canada, the United States, and parts of Europe. The countries with the lowest risk include China, India, and parts of Africa and South America.
Lastly, colorectal cancer survival is highly dependent upon the stage of disease at diagnosis, and typically ranges from a 90% five-year survival rate for cancers detected at the localized stage, to 70% for regional disease, to 10% for people diagnosed with distant metastatic cancer. In general, the earlier the stage at diagnosis, the higher the chance of survival.

REFERENCES
“Colon Cancer Treatment (PDQ®)”. NCI. 2014-05-12. Retrieved 29 June 2014.
“Defining Cancer”. National Cancer Institute. Retrieved 10 June 2014.
“General Information About Colon Cancer”. NCI. 2014-05-12. Retrieved 29 June 2014.
World Cancer Report 2014. World Health Organization. 2014. pp. Chapter 5.5. ISBN 9283204298.
“Colorectal Cancer Prevention (PDQ®)”. National Cancer Institute. 2014-02-27. Retrieved 29 June 2014.
“Screening for Colorectal Cancer”. U.S. Preventive Services Task Force. October 2008. Retrieved 29 June 2014.
Thorat, MA; Cuzick, J (Dec 2013). “Role of aspirin in cancer prevention.”. Current Oncology Reports 15 (6): 533–40. doi:10.1007/s11912-013-0351-3. PMID 24114189.

A SYSTEMATIC REVIEW ON EPILEPSY


A SYSTEMATIC REVIEW ON EPILEPSY
INTRODUCTION
Epilepsy is a group of neurological diseases characterized by epileptic seizures. Epileptic seizures are episodes that can vary from brief and nearly undetectable to long periods of vigorous shaking. In epilepsy, seizures tend to recur and have no immediate underlying cause, while seizures that occur due to a specific cause are not deemed to represent epilepsy.
The cause of most cases of epilepsy is unknown, although some people develop epilepsy as the result of brain injury, stroke, brain tumors, or substance use disorders. Genetic mutations are linked to a small proportion of cases. Epileptic seizures are the result of excessive and abnormal cortical nerve cell activity in the brain. The diagnosis typically involves ruling out other conditions that might cause similar symptoms, such as fainting. Additionally, making the diagnosis involves determining whether any other cause of seizures is present, such as alcohol withdrawal or electrolyte problems. This may be done by imaging the brain and performing blood tests. Epilepsy can often be confirmed with an electroencephalogram (EEG), but a normal test does not rule out the condition.
EPILEPSY: A REVIEW
Epidemiology
Epilepsy is one of the most common serious neurological disorders, affecting about 65 million people globally. It affects 1% of the population by age 20 and 3% of the population by age 75. It is more common in males than in females, with the overall difference being small. Most of those with the disorder (80%) are in the developing world.
The number of people who currently have active epilepsy is in the range 5–10 per 1,000, with active epilepsy defined as someone with epilepsy who has had at least one seizure in the last five years. Epilepsy begins each year in 40–70 per 100,000 people in developed countries and 80–140 per 100,000 in developing countries. Poverty is a risk factor, including both being from a poor country and being poor relative to others within one’s country. In the developed world, epilepsy most commonly starts either in the young or in the old. In the developing world, its onset is more common in older children and young adults, due to higher rates of trauma and infectious diseases. In developed countries, the number of cases a year decreased in children and increased among the elderly between the 1970s and 2003. This has been attributed partly to better survival following strokes in the elderly.
People with epilepsy tend to have recurrent seizures (fits). The seizures occur because of a sudden surge, an overload, of electrical activity in the brain, which causes a temporary disturbance in the messaging systems between brain cells. During a seizure the patient’s brain activity becomes “halted” or “mixed up”.
Every function in our bodies is triggered by messaging systems in our brain. What a patient with epilepsy experiences during a seizure depends on what part of the brain the epileptic activity starts in, and how widely and quickly it spreads from that area. Consequently, there are several types of seizures, and each patient will have epilepsy in his/her own unique way.
The word “epilepsy” comes from the Greek epi, meaning “upon, at, close upon”, and lepsis, meaning “seizure”. From those roots come the Greek epilepsia, the Latin epilepsia, and the Old French epilepsie.

How common is epilepsy?
Approximately 50 out of every 100,000 people develop epilepsy each year in industrialized nations.

About 50 million people worldwide are said to be affected by epilepsy and seizures.
Epilepsy in the USA – according to The Epilepsy Foundation, over 3 million Americans are affected by epilepsy and seizures. About 200,000 new cases of seizures and epilepsy occur in the USA each year, and 10% of all Americans will experience a seizure at some time during their lifetime.
Epilepsy in the UK – according to Epilepsy Action, 460,000 people in the United Kingdom have epilepsy.
Epilepsy worldwide – according to The National Society for Epilepsy (UK), about 50 million people have epilepsy globally.
Causes
Epilepsy can have both genetic and acquired causes, with interaction of these factors in many cases.[38] Established acquired causes include serious brain trauma, stroke, tumours and problems in the brain as a result of a previous infection.[38] In about 60% of cases the cause is unknown.[3][18] Epilepsies caused by genetic, congenital, or developmental conditions are more common among younger people, while brain tumors and strokes are more likely in older people.[18] Seizures may also occur as a consequence of other health problems;[23] if they occur right around a specific cause, such as a stroke, head injury, toxic ingestion or metabolic problem, they are known as acute symptomatic seizures and are in the broader classification of seizure-related disorders rather than epilepsy itself.
Diagnosis
The diagnosis of epilepsy is typically made based on observation of the seizure onset and the underlying cause. Tests that look at the function of the brain, such as an electroencephalogram (EEG), and at its structure, such as MRI, are also usually part of the workup. While figuring out a specific epileptic syndrome is often attempted, it is not always possible. Video and EEG monitoring may be useful in difficult cases.
Epilepsy and life expectancy
Researchers from the University of Oxford and University College London reported in The Lancet in 2013 that premature death is 11 times more common among people with epilepsy compared to the rest of the population. The authors added that the risk is even greater if a person with epilepsy also has a mental illness.
Suicides, accidents and assaults accounted for 15.8% of early deaths. Among these 15.8%, the majority had been diagnosed with a mental disorder.
Epilepsy in developing nations
There are twice as many people with epilepsy in developing nations as in industrialized countries. Unfortunately, over 60% of people in poorer nations do not receive proper medical care for epilepsy, researchers from the University of Oxford reported in the journal The Lancet.
The authors added that the burden of epilepsy in developing countries is “under-acknowledged by health agencies”, even though treatments for the disorder are very cost-effective.
Types of seizures
There are three types of diagnoses a doctor might make when treating a patient with epilepsy:
1. Idiopathic – this means there is no apparent cause.
2. Cryptogenic – this means the doctor thinks there is most probably a cause, but cannot pinpoint it.
3. Symptomatic – this means that the doctor knows what the cause is.
There are three descriptions of seizures, depending on what part of the brain the epileptic activity started in:
Partial seizure
A partial seizure means the epileptic activity took place in just part of the patient’s brain. There are two types of partial seizure:
• Simple Partial Seizure – the patient is conscious during the seizure. In most cases the patient is also aware of his/her surroundings, even though the seizure is in progress.
• Complex Partial Seizure – the patient’s consciousness is impaired. The patient will generally not remember the seizure, and if he/she does, the recollection of it will be vague.
Generalized Seizure
A generalized seizure occurs when both halves of the brain have epileptic activity. The patient’s consciousness is lost while the seizure is in progress.
Secondary Generalized Seizure
A secondary generalized seizure occurs when the epileptic activity starts as a partial seizure, but then spreads to both halves of the brain. As this development happens, the patient loses consciousness.
Symptoms of epilepsy
The main symptom of epilepsy is repeated seizures. There are some symptoms which may indicate that a person has epilepsy. If one or more of these symptoms are present, a medical exam is advised, especially if they recur:
• A convulsion with no temperature (no fever).
• Short spells of blackout, or confused memory.
• Intermittent fainting spells, during which bowel or bladder control is lost. This is frequently followed by extreme tiredness.
• For a short period the person is unresponsive to instructions or questions.
• The person becomes stiff, suddenly, for no obvious reason
• The person suddenly falls for no clear reason
• Sudden bouts of blinking without apparent stimuli
• Sudden bouts of chewing, without any apparent reason
• For a short time the person seems dazed, and unable to communicate
• Repetitive movements that seem inappropriate
• The person becomes fearful for no apparent reason, he/she may even panic or become angry
• Peculiar changes in senses, such as smell, touch and sound
• The arms, legs, or body jerk; in babies these will appear as clusters of rapid jerking movements.

The following conditions need to be eliminated as they may present similar symptoms, and are sometimes misdiagnosed as epilepsy:
• A high fever with epilepsy-like symptoms
• Fainting
• Narcolepsy (recurring episodes of sleep during the day and often disrupted nocturnal sleep)
• Cataplexy (a transient attack of extreme generalized weakness, often precipitated by an emotional response, such as surprise, fear, or anger; one component of the narcolepsy quadrad)
• Sleep disorders
• Nightmares
• Panic attacks
• Fugue states (a rare psychiatric disorder characterized by reversible amnesia for personal identity)
• Psychogenic seizures (a clinical episode that looks like an epileptic seizure, but is not due to epilepsy. The EEG is normal during an attack, and the behavior is often related to psychiatric disturbance, such as a conversion disorder)
• Breath-holding episodes (when a child responds to anger there may be vigorous crying and subsequent apnea and cyanosis – the child then stops breathing and skin color changes with loss of consciousness)
Prevention
While many cases are not preventable, efforts to reduce head injuries, provide good care around the time of birth, and reduce environmental parasites such as the pork tapeworm may be effective. Efforts in one part of Central America to decrease rates of pork tapeworm resulted in a 50% decrease in new cases of epilepsy.

Management
Epilepsy is usually treated with daily medication once a second seizure has occurred, but for those at high risk, medication may be started after the first seizure. In some cases, a special diet, the implantation of a neurostimulator, or neurosurgery may be required.
Recent developments on epilepsy treatment from MNT news
Brain stimulator reduces seizures in patients with drug-resistant epilepsy
In 2013, the Food and Drug Administration approved an implantable medical device to treat epilepsy. Now, doctors from the Rush Epilepsy Center in Illinois are the first to couple it with a novel electrode placement planning system, which is enabling the device to better reduce seizures.
The device, called the NeuroPace RNS System, works by monitoring the brain for abnormal electrical activity and responding with small, “on-demand” bursts of direct electrical stimulation. The doctors from the Rush Epilepsy Center explain that, by doing this, the device suppresses seizures before they begin.
New pill developed to suppress epilepsy seizures
Within a decade, people with drug-resistant epilepsy may be able to take a pill to suppress seizures as required, in a similar way to how we take painkillers to relieve a headache.
Researchers from University College London (UCL) in the UK believe that the new “on demand” seizure suppressant pill they have developed may offer help to the roughly 30% of epilepsy patients who do not respond successfully to anti-epileptic drugs (AEDs).
Omega-3 fish oil ‘could reduce seizure frequency for epilepsy patients’
A new study claims epilepsy patients could reduce seizure frequency by consuming low doses of omega-3 fish oil every day. The research team at the University of California-Los Angeles (UCLA) School of Medicine says the findings may be particularly useful to epilepsy patients who no longer respond to medication.
They publish their findings in the Journal of Neurology, Neurosurgery & Psychiatry.
Music could help treat epilepsy
Researchers are increasingly reporting the therapeutic potential of music. Now, a new study suggests it could be useful for treating epilepsy.
CONCLUSION
Epilepsy is a brain disorder that causes people to have recurring seizures. The seizures happen when clusters of nerve cells, or neurons, in the brain send out the wrong signals. People may have strange sensations and emotions or behave strangely. They may have violent muscle spasms or lose consciousness.
Epilepsy has many possible causes, including illness, brain injury, and abnormal brain development. In many cases, the cause is unknown.
Doctors use brain scans and other tests to diagnose epilepsy. It is important to start treatment right away. There is no cure for epilepsy, but medicines can control seizures for most people. When medicines are not working well, surgery or implanted devices such as vagus nerve stimulators may help. Special diets can help some children with epilepsy.

REFERENCES
• Chang BS, Lowenstein DH (2003). “Epilepsy”. N. Engl. J. Med. 349 (13): 1257–66. doi:10.1056/NEJMra022308. PMID 14507951.
• Fisher, Robert S; Acevedo, C; Arzimanoglou, A; Bogacz, A; Cross, JH; Elger, CE; Engel J, Jr; Forsgren, L; French, JA; Glynn, M; Hesdorffer, DC; Lee, BI; Mathern, GW; Moshé, SL; Perucca, E; Scheffer, IE; Tomson, T; Watanabe, M; Wiebe, S (April 2014). “ILAE Official Report: A practical clinical definition of epilepsy” (PDF). Epilepsia 55 (4): 475–82. doi:10.1111/epi.12550. PMID 24730690.
• “Epilepsy”. Fact Sheets. World Health Organization. October 2012. Retrieved January 24, 2013.
• Fisher R, van Emde Boas W, Blume W, Elger C, Genton P, Lee P, Engel J (2005). “Epileptic seizures and epilepsy: definitions proposed by the International League Against Epilepsy (ILAE) and the International Bureau for Epilepsy (IBE)”. Epilepsia 46 (4): 470–2. doi:10.1111/j.0013-9580.2005.66104.x. PMID 15816939.
• Longo, Dan L (2012). “369 Seizures and Epilepsy”. Harrison’s principles of internal medicine (18th ed.). McGraw-Hill. p. 3258. ISBN 978-0-07-174887-2.
• Eadie, MJ (December 2012). “Shortcomings in the current treatment of epilepsy.”. Expert Review of Neurotherapeutics 12 (12): 1419–27. doi:10.1586/ern.12.129. PMID 23237349.

A SYSTEMATIC REVIEW ON OSTEOARTHRITIS


A SYSTEMATIC REVIEW ON OSTEOARTHRITIS
INTRODUCTION
Osteoarthritis is the most common form of arthritis, affecting millions of people worldwide. It occurs when the protective cartilage on the ends of your bones wears down over time. Although osteoarthritis can damage any joint in your body, the disorder most commonly affects joints in your hands, knees, hips and spine. Osteoarthritis often gradually worsens, and no cure exists. But staying active, maintaining a healthy weight and other treatments may slow progression of the disease and help improve pain and joint function.
Sometimes called degenerative joint disease or degenerative arthritis, osteoarthritis (OA) is the most common chronic condition of the joints, affecting approximately 27 million Americans. OA can affect any joint, but it occurs most often in knees, hips, lower back and neck, small joints of the fingers and the bases of the thumb and big toe.
In normal joints, a firm, rubbery material called cartilage covers the end of each bone. Cartilage provides a smooth, gliding surface for joint motion and acts as a cushion between the bones. In OA, the cartilage breaks down, causing pain, swelling and problems moving the joint. As OA worsens over time, bones may break down and develop growths called spurs. Bits of bone or cartilage may chip off and float around in the joint. In the body, an inflammatory process occurs and cytokines (inflammatory proteins) and enzymes are released that further damage the cartilage. In the final stages of OA, the cartilage wears away and bone rubs against bone, leading to joint damage and more pain.
REVIEW ON OSTEOARTHRITIS
Osteoarthritis (OA), also known as degenerative arthritis, degenerative joint disease, or osteoarthrosis, is a type of joint disease that results from breakdown of joint cartilage and underlying bone. The most common symptoms are joint pain and stiffness. Initially, symptoms may occur only following exercise, but over time they may become constant. Other symptoms may include joint swelling, decreased range of motion, and, when the back is affected, weakness or numbness of the arms and legs. The most commonly involved joints are those near the ends of the fingers, at the base of the thumb, neck, lower back, knees, and hips. Joints on one side of the body are often more affected than those on the other. Usually the problems come on over years. It can affect work and normal daily activities. Unlike other types of arthritis, only the joints are typically affected.
Causes include previous joint injury, abnormal joint or limb development, and inherited factors. Risk is greater in those who are overweight, have legs of different lengths, or have jobs that involve high levels of joint stress. Osteoarthritis is believed to be caused by mechanical stress on the joint and low-grade inflammatory processes. It develops as cartilage is lost, with the underlying bone eventually becoming affected. Because pain may make it difficult to exercise, muscle loss may occur. Diagnosis is typically based on signs and symptoms, with medical imaging and other tests occasionally used to support the diagnosis or rule out other problems. Unlike in rheumatoid arthritis, which is primarily an inflammatory condition, the joints do not typically become hot or red.
Treatment includes exercise, efforts to decrease joint stress, support groups, and pain medications. Efforts to decrease joint stress include resting, the use of a cane, and braces. Weight loss may help in those who are overweight. Pain medications may include paracetamol (acetaminophen). If this does not work NSAIDs such as naproxen may be used but these medications are associated with greater side effects. Opioids if used are generally only recommended short term due to the risk of addiction. If pain interferes with normal life despite other treatments, joint replacement surgery may help. An artificial joint, however, only lasts a limited amount of time. Outcomes for most people with osteoarthritis are good.
OA is the most common form of arthritis, with disease of the knee and hip affecting about 3.8% of people as of 2010. Among those over 60 years old, about 10% of males and 18% of females are affected. It is the cause of about 2% of years lived with disability. In Australia about 1.9 million people are affected, and in the United States about 27 million. Before 45 years of age it is more common in men; after 45 it is more common in women. It becomes more common in both sexes as people age.
Signs and symptoms

Osteoarthritis most often occurs in the hands (at the ends of the fingers and thumbs), neck, lower back, knees, and hips
The main symptom is pain, causing loss of ability and often stiffness. “Pain” is generally described as a sharp ache or a burning sensation in the associated muscles and tendons. OA can cause a crackling noise (called “crepitus”) when the affected joint is moved or touched and people may experience muscle spasms and contractions in the tendons. Occasionally, the joints may also be filled with fluid. Some people report increased pain associated with cold temperature, high humidity, and/or a drop in barometric pressure, but studies have had mixed results.
OA commonly affects the hands, feet, spine, and the large weight bearing joints, such as the hips and knees, although in theory, any joint in the body can be affected. As OA progresses, the affected joints appear larger, are stiff and painful, and usually feel better with gentle use but worse with excessive or prolonged use, thus distinguishing it from rheumatoid arthritis.
In smaller joints, such as at the fingers, hard bony enlargements, called Heberden’s nodes (on the distal interphalangeal joints) and/or Bouchard’s nodes (on the proximal interphalangeal joints), may form, and though they are not necessarily painful, they do limit the movement of the fingers significantly. OA at the toes leads to the formation of bunions, rendering them red or swollen. Some people notice these physical changes before they experience any pain.
OA is the most common cause of a joint effusion of the knee.
Pathophysiology

Normal hip joint

Hip joint with osteoarthritis
While OA is a degenerative joint disease that may cause gross cartilage loss and morphological damage to other joint tissues, more subtle biochemical changes occur in the earliest stages of OA progression. The water content of healthy cartilage is finely balanced by a compressive force driving water out and a swelling pressure drawing water in. Collagen fibers exert the compressive force, whereas the Gibbs–Donnan effect and cartilage proteoglycans create the osmotic pressure that tends to draw water in.
However, during the onset of OA, the collagen matrix becomes more disorganized and the proteoglycan content of cartilage decreases. The breakdown of collagen fibers results in a net increase in water content: although the overall loss of proteoglycans reduces the osmotic pull, this is outweighed by the loss of collagen, which normally restrains the swelling. Without the protective effects of the proteoglycans, the collagen fibers of the cartilage become susceptible to degradation, exacerbating the degeneration. Inflammation of the synovium (joint cavity lining) and the surrounding joint capsule can also occur, though it is often mild compared with what occurs in rheumatoid arthritis. This can happen as breakdown products from the cartilage are released into the synovial space and the cells lining the joint attempt to remove them.
Other structures within the joint can also be affected. The ligaments within the joint become thickened and fibrotic and the menisci can become damaged and wear away. Menisci can be completely absent by the time a person undergoes a joint replacement. New bone outgrowths, called “spurs” or osteophytes, can form on the margins of the joints, possibly in an attempt to improve the congruence of the articular cartilage surfaces in the absence of the menisci. The subchondral bone volume increases and becomes less mineralized (hypomineralization). All these changes can cause problems functioning. The pain in an osteoarthritic joint has been related to thickened synovium and subchondral bone lesions.
Diagnosis
Diagnosis is made with reasonable certainty based on history and clinical examination. X-rays may confirm the diagnosis. The typical changes seen on X-ray include: joint space narrowing, subchondral sclerosis (increased bone formation around the joint), subchondral cyst formation, and osteophytes. Plain films may not correlate with the findings on physical examination or with the degree of pain. Usually other imaging techniques are not necessary to clinically diagnose OA.
In 1990, the American College of Rheumatology, using data from a multi-center study, developed a set of criteria for the diagnosis of hand OA based on hard tissue enlargement and swelling of certain joints. These criteria were found to be 92% sensitive and 98% specific for hand OA versus other entities such as rheumatoid arthritis and spondyloarthropathies.
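As an illustration of what those accuracy figures mean, the short sketch below recomputes them from a hypothetical confusion matrix. The patient counts are invented for illustration (assuming 100 patients with hand OA and 100 without); only the 92% and 98% figures come from the study above.

```python
# Hypothetical counts consistent with the reported accuracy figures:
# 100 patients with confirmed hand OA, 100 with other diagnoses.
true_positive = 92    # hand OA patients who meet the criteria
false_negative = 8    # hand OA patients the criteria miss
true_negative = 98    # non-OA patients correctly excluded
false_positive = 2    # non-OA patients wrongly meeting the criteria

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)

print(f"sensitivity = {sensitivity:.0%}")   # sensitivity = 92%
print(f"specificity = {specificity:.0%}")   # specificity = 98%
```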

CONCLUSION
Osteoarthritis is the most common form of arthritis. It causes pain, swelling, and reduced motion in your joints. It can occur in any joint, but usually it affects your hands, knees, hips or spine.
Osteoarthritis breaks down the cartilage in your joints. Cartilage is the slippery tissue that covers the ends of bones in a joint. Healthy cartilage absorbs the shock of movement. When you lose cartilage, your bones rub together. Over time, this rubbing can permanently damage the joint.
Risk factors for osteoarthritis include
• Being overweight
• Getting older
• Injuring a joint
No single test can diagnose osteoarthritis. Most doctors use several methods, including medical history, a physical exam, x-rays, or lab tests. Treatments include exercise, medicines, and sometimes surgery.

REFERENCES
Atlas of Osteoarthritis. Springer. 2015. p. 21. ISBN 9781910315163.
“Osteoarthritis”. National Institute of Arthritis and Musculoskeletal and Skin Diseases. April 2015. Retrieved 13 May 2015.
Glyn-Jones, S; Palmer, AJ; Agricola, R; Price, AJ; Vincent, TL; Weinans, H; Carr, AJ (3 March 2015). “Osteoarthritis.”. Lancet. PMID 25748615.
Berenbaum F (2013). “Osteoarthritis as an inflammatory disease (osteoarthritis is not osteoarthrosis!)”. Osteoarthritis and Cartilage 21 (1): 16–21. doi:10.1016/j.joca.2012.11.012. PMID 23194896.
Conaghan P (2014). “Osteoarthritis — Care and management in adults” (PDF).
March, L; Smith, EU; Hoy, DG; Cross, MJ; Sanchez-Riera, L; Blyth, F; Buchbinder, R; Vos, T; Woolf, AD (June 2014). “Burden of disability due to musculoskeletal (MSK) disorders.”. Best Practice & Research. Clinical Rheumatology 28 (3): 353–66. PMID 25481420.
Elsternwick (2013). “A problem worth solving.”. Arthritis and Osteoporosis Victoria.
MedlinePlus Encyclopedia: Osteoarthritis.

A SYSTEMATIC REVIEW ON THE EPIDEMIOLOGY OF CHOLERA


A SYSTEMATIC REVIEW ON THE EPIDEMIOLOGY OF CHOLERA
INTRODUCTION
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. The dehydration may result in the skin turning bluish. Symptoms start two hours to five days after exposure.
Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by water and food that has been contaminated with human feces containing the bacteria. Insufficiently cooked seafood is a common source. Humans are the only animal affected. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. There are concerns that rising sea levels will increase rates of disease. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate.
REVIEW ON THE EPIDEMIOLOGY OF CHOLERA
Cholera affects an estimated 3–5 million people worldwide and causes 58,000–130,000 deaths a year as of 2010. While it is currently classified as a pandemic, it is rare in the developed world. Children are most often affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and south-east Asia. While the risk of death among those affected is usually less than 5%, it may be as high as 50% among some groups who do not have access to treatment. Historical descriptions of cholera are found as early as the 5th century BC in Sanskrit. The study of cholera by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology.
Cholera is an acute infection of the small intestine that is a particular problem in developing countries where access to clean drinking water and hygiene measures are poor. The disease causes severe diarrhea and vomiting leading to dehydration. Children and the elderly are at particular risk of rapidly developing and succumbing to the dehydration caused by cholera.
Over the last century, the number of cholera cases and deaths due to cholera has steadily declined, mainly due to improvements in sanitation and water hygiene. In England, for example, no cases of cholera have originated in the country since 1893, and those that have been reported were acquired abroad.
Some of the regions where cholera is still a major health threat include:
• Sub-Saharan Africa or the countries south of the Sahara desert
• Some parts of the Middle East
• South and south-east Asia including India and Bangladesh
• Some parts of South America
In these regions, cholera is not a regular occurrence but may arise as outbreaks, especially during the summer season or in the wake of natural disasters, wars, or civil disorder. The outbreaks are almost always linked to overcrowding, poor living conditions, and a lack of access to clean drinking water.
Cholera was first seen to spread as a pandemic to different parts of the world from the Indian subcontinent in 1817. The current pandemic originated in Sulawesi, Indonesia in 1961 and was caused by the El Tor biotype of the Vibrio cholerae serotype O1. It began to spread rapidly to other countries in Asia, Europe and Africa and even to Latin America in 1991.
Following this was the identification of a new strain called Vibrio cholerae O139 Bengal that caused outbreaks in India and Bangladesh in 1992. This strain is still confined to Asian countries.
Cholera Causes
Vibrio cholerae, the bacterium that causes cholera, is usually found in food or water contaminated by feces from a person with the infection. Common sources include:
• Municipal water supplies
• Ice made from municipal water
• Foods and drinks sold by street vendors
• Vegetables grown with water containing human wastes
• Raw or undercooked fish and seafood caught in waters polluted with sewage
When a person consumes the contaminated food or water, the bacteria release a toxin in the intestines that produces severe diarrhea.
It is not likely you will catch cholera just from casual contact with an infected person.
Transmission
Cholera has been found in two animal populations: shellfish and plankton. Cholera is typically transmitted to humans by either contaminated food or water. Most cholera cases in developed countries are a result of transmission by food, while in the developing world it is more often water. Food transmission occurs when people harvest seafood such as oysters in waters contaminated with sewage, as Vibrio cholerae accumulates in zooplankton and the oysters eat the zooplankton.
People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as “rice-water”, contaminates water used by others. The source of the contamination is typically other cholera sufferers when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any infected water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. Both toxic and nontoxic strains exist. Nontoxic strains can acquire toxicity through a temperate bacteriophage. Coastal cholera outbreaks typically follow zooplankton blooms, thus making cholera a zoonotic disease.
Cholera Symptoms
Symptoms of cholera can begin as soon as a few hours or as long as five days after infection. Often, symptoms are mild. But sometimes they are very serious. About one in 20 people infected have severe watery diarrhea accompanied by vomiting, which can quickly lead to dehydration. Although many infected people may have minimal or no symptoms, they can still contribute to spread of the infection.
Signs and symptoms of dehydration include:
• Rapid heart rate
• Loss of skin elasticity (the ability to return to original position quickly if pinched)
• Dry mucous membranes, including the inside of the mouth, throat, nose, and eyelids
• Low blood pressure
• Thirst
• Muscle cramps
If not treated, dehydration can lead to shock and death in a matter of hours.
Diagnosis
A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment is usually started without or before confirmation by laboratory analysis.
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory.
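The stepwise testing sequence above can be summarized as a simple decision procedure. The sketch below is only an illustration of that sequence; the set-of-serogroups representation and the function name are hypothetical conveniences, not part of any laboratory standard.

```python
def cholera_lab_workflow(isolated_serogroups):
    """Stepwise specimen testing in a suspected epidemic, following the
    sequence described above. `isolated_serogroups` is a hypothetical
    representation of what the laboratory grew from the stool sample."""
    if "O1" in isolated_serogroups:
        return "V. cholerae O1 confirmed"
    if "O139" in isolated_serogroups:
        return "V. cholerae O139 confirmed"
    # Neither organism isolated: escalate to a reference laboratory.
    return "send stool specimen to a reference laboratory"

print(cholera_lab_workflow({"O1"}))   # V. cholerae O1 confirmed
print(cholera_lab_workflow(set()))    # send stool specimen to a reference laboratory
```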
Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States.

Prevention
The World Health Organization recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas, either helping to prevent cholera or, through inaction, indirectly facilitating its spread.
Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to nearly universal advanced water treatment and sanitation practices, cholera is no longer a major health threat. The last major outbreak of cholera in the United States occurred in 1910–1911. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic.
Prevention involves improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by E. coli. The primary treatment is oral rehydration therapy—the replacement of fluids with slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer’s lactate, may be required, and antibiotics may be beneficial. Testing to see what antibiotic the cholera is susceptible to can help guide the choice.
CONCLUSION
Cholera is an infectious disease that causes severe watery diarrhea, which can lead to dehydration and even death if untreated. It is caused by eating food or drinking water contaminated with a bacterium called Vibrio cholerae.
Cholera was prevalent in the U.S. in the 1800s, before modern water and sewage treatment systems eliminated its spread by contaminated water. Only about 10 cases of cholera are reported each year in the U.S. and half of these are acquired abroad. Rarely, contaminated seafood has caused cholera outbreaks in the U.S. However, cholera outbreaks are still a serious problem in other parts of the world. At least 150,000 cases are reported to the World Health Organization each year. The disease is most common in places with poor sanitation, crowding, war, and famine. Common locations include parts of Africa, south Asia, and Latin America. If you are traveling to one of those areas, knowing the following cholera facts can help protect you and your family.

REFERENCES

Finkelstein, Richard. “Medical Microbiology”. http://www.ncbi.nlm.nih.gov/books/NBK8407/.
“Cholera – Vibrio cholerae infection Information for Public Health & Medical Professionals”. cdc.gov. January 6, 2015. Retrieved 17 March 2015.

“Cholera vaccines: WHO position paper.” (PDF). Weekly epidemiological record 13 (85): 117–128. Mar 26, 2010. PMID 20349546.
Harris, JB; LaRocque, RC; Qadri, F; Ryan, ET; Calderwood, SB (30 June 2012). “Cholera.”. Lancet 379 (9835): 2466–76. PMID 22748592.
Bailey, Diane (2011). Cholera (1st ed.). New York: Rosen Pub. p. 7. ISBN 9781435894372.
“Sources of Infection & Risk Factors”. cdc.gov. November 7, 2014. Retrieved 17 March 2015.
“Diagnosis and Detection”. cdc.gov. February 10, 2015. Retrieved 17 March 2015.
“Cholera – Vibrio cholerae infection Treatment”. cdc.gov. November 7, 2014. Retrieved 17 March 2015.

DESCRIBE THE PREVENTION AND CONTROL MEASURES OF TUBERCULOSIS IN DEVELOPING COUNTRIES


DESCRIBE THE PREVENTION AND CONTROL MEASURES OF TUBERCULOSIS IN DEVELOPING COUNTRIES
INTRODUCTION
A developing country, also called a less developed country or underdeveloped country, is a nation with an underdeveloped industrial base and a low Human Development Index (HDI) relative to other countries. Tuberculosis (TB or MTB, short for tubercle bacillus), in the past also called phthisis, phthisis pulmonalis, or consumption, is a widespread infectious disease caused by various strains of mycobacteria, usually Mycobacterium tuberculosis. Tuberculosis typically attacks the lungs but can also affect other parts of the body. It is spread through the air when people who have an active TB infection cough, sneeze, or otherwise transmit respiratory fluids through the air. Most infections cause no symptoms, a state known as latent tuberculosis. About one in ten latent infections eventually progresses to active disease which, if left untreated, kills more than 50% of those so infected.
The classic symptoms of active TB infection are a chronic cough with blood-tinged sputum, fever, night sweats, and weight loss (the last of these giving rise to the formerly common term for the disease, “consumption”). Infection of other organs causes a wide range of symptoms. Diagnosis of active TB relies on radiology (commonly chest X-rays), as well as microscopic examination and microbiological culture of body fluids. Diagnosis of latent TB relies on the tuberculin skin test (TST) and/or blood tests. Treatment is difficult and requires administration of multiple antibiotics over a long period of time. Household, workplace and social contacts are also screened and treated if necessary. Antibiotic resistance is a growing problem in multiple drug-resistant tuberculosis (MDR-TB) infections. Prevention relies on early detection and treatment of cases and on screening programs and vaccination with the Bacillus Calmette–Guérin (BCG) vaccine.
PREVENTION AND CONTROL MEASURES OF TUBERCULOSIS IN DEVELOPING COUNTRIES
The principles of diagnosis and treatment of TB disease discussed in this section are guidelines and not meant to substitute for clinical experience and judgment. Medical providers not familiar with the management of TB disease should consult a person with expertise. All facilities’ local operations procedures should include plans for consultation with and referral to persons with expertise in TB and should include criteria delineating when consultation and referral are indicated.
Although the index of suspicion for TB disease varies by individual risk factors and prevalence of TB in the population served by the correctional facility, correctional facilities typically are considered higher-risk settings. A diagnosis of TB disease should be considered for any patient who has a persistent cough (i.e., one lasting >3 weeks) or other signs or symptoms compatible with TB disease (e.g., hemoptysis, night sweats, weight loss, anorexia, and fever). Diagnostic tests for TB include the TST, the QuantiFERON-TB Gold test (QFT-G), chest radiography, and laboratory examination of sputum samples or other body tissues and fluids.
Persons exposed to inmates with TB disease might become latently infected with M. tuberculosis, depending on host immunity and the degree and duration of exposure. Therefore, the treatment of persons with TB disease plays a key role in TB control by stopping transmission and preventing potentially infectious cases from occurring. Latent TB infection (LTBI) is an asymptomatic condition that can be diagnosed by the TST or QFT-G.
Diagnosis of tuberculosis
To check for TB, a health care provider will use a stethoscope to listen to the lungs and will check for swelling in the lymph nodes. They will also ask about symptoms and medical history, as well as assess a person’s risk of exposure to TB.
The most common diagnostic test for TB is a skin test where a small injection of PPD tuberculin, an extract of the TB bacterium, is made just below the inside forearm.
The injection site should be checked after 2-3 days; if a hard, red bump has developed, TB infection is likely.

TB is most commonly diagnosed via a skin test involving an injection into the forearm.
Unfortunately, the skin test is not 100% accurate and has been known to give incorrect positive and negative readings.
However, there are other tests that are available to diagnose TB. Blood tests, chest X-rays and sputum tests can all be used to test for the presence of TB bacteria, and may be used alongside a skin test. MDR-TB is more difficult to diagnose than regular TB. It is also difficult to diagnose regular TB in children.
Treatments for tuberculosis
The majority of TB cases can be cured when the right medication is available and administered correctly.
The precise type and length of antibiotic treatment depend on a person’s age, overall health, potential resistance to drugs, whether the TB is latent or active, and the location of infection (e.g., the lungs, brain, or kidneys).
People with latent TB may need just one kind of TB antibiotic, whereas people with active TB (particularly MDR-TB) will often require a prescription of multiple drugs.
Antibiotics are usually required to be taken for a relatively long time. The standard length of time for a course of TB antibiotics is about 6 months.
All TB medication is toxic to the liver, and although side effects are uncommon, when they do occur, they can be quite serious. Potential side effects should be reported to a health care provider and include:
• Dark urine
• Fever
• Jaundice
• Loss of appetite
• Nausea and vomiting.
It is important for any course of treatment to be completed fully, even if the TB symptoms have gone away. Any bacteria that survive the treatment could become resistant to the medication that has been prescribed, which could lead to MDR-TB in the future.
Directly observed therapy (DOT) can be recommended. It involves a health care worker administering the TB medication to ensure that the course of treatment is completed.

Prevention of tuberculosis

If you have active TB, a face mask can help lower the risk of the disease spreading to other people.

A few general measures can be taken to prevent the spread of active TB. Avoiding other people, by not going to school or work and by not sleeping in the same room as someone else, will help to minimize the risk of germs reaching anyone else. Wearing a mask, covering the mouth, and ventilating rooms can also limit the spread of bacteria.
In some countries, BCG injections are given to children in order to vaccinate them against tuberculosis. It is not recommended for general use in the US because it is not effective in adults, and it can adversely influence the results of skin testing diagnoses.
The most important thing to do is to finish entire courses of medication when they are prescribed. MDR-TB bacteria are far deadlier than regular TB bacteria. Some cases of MDR-TB require extensive courses of chemotherapy, which can be expensive and cause severe adverse drug reactions in patients.

CONCLUSION
Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. The bacteria usually attack the lungs, but they can also damage other parts of the body.
TB spreads through the air when a person with TB of the lungs or throat coughs, sneezes, or talks. If you have been exposed, you should go to your doctor for tests. You are more likely to get TB if you have a weak immune system.
Symptoms of TB in the lungs may include
• A bad cough that lasts 3 weeks or longer
• Weight loss
• Loss of appetite
• Coughing up blood or mucus
• Weakness or fatigue
• Fever
• Night sweats
Skin tests, blood tests, x-rays, and other tests can tell if you have TB. If not treated properly, TB can be deadly. You can usually cure active TB by taking several medicines for a long period of time.
One-third of the world’s population is thought to have been infected with M. tuberculosis, and new infections occur in about 1% of the population each year. In 2007, an estimated 13.7 million chronic cases were active globally, while in 2013, an estimated 9 million new cases occurred. In 2013 there were between 1.3 and 1.5 million associated deaths, most of which occurred in developing countries. The total number of tuberculosis cases has been decreasing since 2006, and new cases have decreased since 2002. The rate of tuberculosis in different areas varies across the globe; about 80% of the population in many Asian and African countries tests positive in tuberculin tests, while only 5–10% of the United States population tests positive. More people in the developing world contract tuberculosis because of a poor immune system, largely due to high rates of HIV infection and the corresponding development of AIDS.

REFERENCES
1. Elzinga G, Raviglione MC, Maher D. Scale up: meeting targets in global tuberculosis control. Lancet 2004;363:814–9.
2. MacNeil J, Lobato MN, Moore M. An unanswered health disparity: tuberculosis among correctional inmates, 1993 through 2003. Am J Public Health 2005;95:1800–5.
3. CDC. Prevention and control of tuberculosis in correctional facilities: recommendations of the Advisory Council for the Elimination of Tuberculosis. MMWR 1996;45(No. RR-8):1–27.
4. Bureau of Justice Statistics. Adult correctional populations, 1980–2004. Washington, DC: US Department of Justice, Office of Justice Programs; 2005. Available at http://www.ojp.usdoj.gov/bjs/glance/corr2.htm.
5. US Department of Justice. Prison and jail inmates at midyear 2003. Bureau of Justice Statistics Bulletin; 2004. NCJ 203947.
6. CDC. Reported tuberculosis in the United States, 2003. Atlanta, GA: US Department of Health and Human Services, CDC; 2004.
7. CDC. Probable transmission of multidrug-resistant tuberculosis in a correctional facility—California. MMWR 1993;42:48–51.
8. Braun MM, Truman BI, Maguire B, et al. Increasing incidence of tuberculosis in a prison inmate population: association with HIV infection. JAMA 1989;261:393–7.

DISCUSS THE CHAIN OF INFECTION AND THE DISEASE CRITERIA OF DISEASE CAUSATION


DISCUSS THE CHAIN OF INFECTION AND THE DISEASE CRITERIA OF DISEASE CAUSATION
INTRODUCTION
The chain of infection, if we think of it as an actual chain, is made up of six different links: pathogen (infectious agent), reservoir, portal of exit, means of transmission, portal of entry, and the new host. Each link has a unique role in the chain, and each can be interrupted, or broken, through various means. Disease causation criteria, in turn, define the ways and means by which diseases are caused and spread in the environment.
CHAIN OF INFECTION AND THE DISEASE CRITERIA OF DISEASE CAUSATION
More specifically, transmission occurs when the agent leaves its reservoir or host through a portal of exit, is conveyed by some mode of transmission, and enters through an appropriate portal of entry to infect a susceptible host. This sequence is sometimes called the chain of infection.
The Six Links

The first link is the pathogen itself. This is the disease-causing organism. For many illnesses and diseases this is a virus or bacterium. In order to break this link, various methods can be used, including the pasteurization of milk, the chlorination of drinking water, or the use of disinfectants.
The second link is the reservoir. This is the natural environment that the pathogen requires for survival. Reservoirs can be a person, an animal, or an environmental component, such as soil or water. This link can be broken through medical treatment and testing, insect and rodent eradication, or quarantine.
The third link is the portal of exit. This link is needed for the pathogen to leave the reservoir. If the reservoir is a human, then the portal of exit may be saliva, mucous membranes, feces, blood, or nose or throat discharges. By using barrier methods, such as condoms or masks, or covering the mouth while coughing, this link can be broken.
The fourth link is the means of transmission. The pathogen can be transmitted either directly or indirectly. Direct transmission requires close association with the infected host but not necessarily physical contact. Indirect transmission requires a vector, such as an animal or insect. The link can be broken through hand washing, safe sex practices, or avoiding contact with infected individuals.
Link number five is the portal of entry. Entry of the pathogen can take place in one of three ways: penetration, inhalation, or ingestion. The level and severity of an infection may depend on the depth of penetration. Similar to the portal of exit, barrier methods, such as condoms or masks, can be used to break this link along with other methods, such as insect repellants.
The final link is the new host. Once in the new host, various factors influence the severity of infection, including the strength of the immune system and the reproductive rate of the pathogen. Immunization, health promotion, and medical treatment can be used to break this link in the chain.
Example of a Chain of Infection
An example of illness resulting from the chain of infection is the common cold. In this case, the pathogen is often a rhinovirus. The reservoir is another person carrying this virus, who then propels the virus into the air via a portal of exit, such as a cough or sneeze. The route of transmission is direct to the new host, and entry takes place through inhalation (the portal of entry) of the virus.
Chain Of Infection – Infection Prevention And Control
Certain conditions must be met in order for a microbe or infectious disease to be spread from person to person. This process, called the chain of infection, can only occur when all six links in the chain are intact.
Infection control principles are aimed at breaking one or more links in this chain; a small illustrative sketch in code follows the list below.
• Causative Agent – the microorganism (for example a bacterium, virus, or fungus).
• Reservoir (source) – a host which allows the microorganism to live, and possibly grow, and multiply. Humans, animals and the environment can all be reservoirs for microorganisms.
• Portal of Exit – a path for the microorganism to escape from the host. The blood, respiratory tract, skin and mucous membranes, genitourinary tract, gastrointestinal tract, and transplacental route from mother to her unborn infant are some examples.
• Mode of Transmission – since microorganisms cannot travel on their own, they require a vehicle to carry them to other people and places.
• Portal of Entry – a path for the microorganism to get into a new host, similar to the portal of exit.
• Susceptible Host – a person susceptible to the microorganism.
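As a minimal illustration, the chain can be modelled as a list of links, with transmission possible only while no link is broken. The link names and the function below are hypothetical conveniences for the sketch, not standard terminology.

```python
# The six links as listed above. Transmission is only possible while
# every link is intact, so breaking any single link stops the chain.
CHAIN_OF_INFECTION = [
    "causative agent",
    "reservoir",
    "portal of exit",
    "mode of transmission",
    "portal of entry",
    "susceptible host",
]

def transmission_possible(broken_links):
    """Return True only if no link in the chain has been broken."""
    return not any(link in broken_links for link in CHAIN_OF_INFECTION)

print(transmission_possible(set()))                       # True
# Hand washing interrupts the mode of transmission alone, yet that is
# enough to stop transmission entirely.
print(transmission_possible({"mode of transmission"}))    # False
```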
DISEASE CRITERIA OF DISEASE CAUSATION
A critical premise of epidemiology is that disease and other health events do not occur randomly in a population, but are more likely to occur in some members of the population than others because of risk factors that may not be distributed randomly in the population. As noted earlier, one important use of epidemiology is to identify the factors that place some members at greater risk than others.
Causation
A number of models of disease causation have been proposed. Among the simplest of these is the epidemiologic triad or triangle, the traditional model for infectious disease. The triad consists of an external agent, a susceptible host, and an environment that brings the host and agent together. In this model, disease results from the interaction between the agent and the susceptible host in an environment that supports transmission of the agent from a source to that host. Agent, host, and environmental factors interrelate in a variety of complex ways to produce disease. Different diseases require different balances and interactions of these three components. Development of appropriate, practical, and effective public health measures to control or prevent disease usually requires assessment of all three components and their interactions.
Figure 1.16 Epidemiologic Triad

Agent originally referred to an infectious microorganism or pathogen: a virus, bacterium, parasite, or other microbe. Generally, the agent must be present for disease to occur; however, presence of that agent alone is not always sufficient to cause disease. A variety of factors influence whether exposure to an organism will result in disease, including the organism’s pathogenicity (ability to cause disease) and dose.
Over time, the concept of agent has been broadened to include chemical and physical causes of disease or injury. These include chemical contaminants (such as the L-tryptophan contaminant responsible for eosinophilia-myalgia syndrome), as well as physical forces (such as repetitive mechanical forces associated with carpal tunnel syndrome). While the epidemiologic triad serves as a useful model for many diseases, it has proven inadequate for cardiovascular disease, cancer, and other diseases that appear to have multiple contributing causes without a single necessary one.
Host refers to the human who can get the disease. A variety of factors intrinsic to the host, sometimes called risk factors, can influence an individual’s exposure, susceptibility, or response to a causative agent. Opportunities for exposure are often influenced by behaviors such as sexual practices, hygiene, and other personal choices as well as by age and sex. Susceptibility and response to an agent are influenced by factors such as genetic composition, nutritional and immunologic status, anatomic structure, presence of disease or medications, and psychological makeup.
Environment refers to extrinsic factors that affect the agent and the opportunity for exposure. Environmental factors include physical factors such as geology and climate, biologic factors such as insects that transmit the agent, and socioeconomic factors such as crowding, sanitation, and the availability of health services.
Component causes and causal pies
Because the agent-host-environment model did not work well for many non-infectious diseases, several other models that attempt to account for the multifactorial nature of causation have been proposed. One such model was proposed by Rothman in 1976, and has come to be known as the Causal Pies. An individual factor that contributes to cause disease is shown as a piece of a pie. After all the pieces of a pie fall into place, the pie is complete — and disease occurs. The individual factors are called component causes. The complete pie, which might be considered a causal pathway, is called a sufficient cause. A disease may have more than one sufficient cause, with each sufficient cause being composed of several component causes that may or may not overlap. A component that appears in every pie or pathway is called a necessary cause, because without it, disease does not occur. Note in Figure 1.17 that component cause A is a necessary cause because it appears in every pie.

Rothman’s Causal Pies

Source: Rothman KJ. Causes. Am J Epidemiol 1976;104:587–592.
The component causes may include intrinsic host factors as well as the agent and the environmental factors of the agent-host-environment triad. A single component cause is rarely a sufficient cause by itself. For example, even exposure to a highly infectious agent such as measles virus does not invariably result in measles disease. Host susceptibility and other host factors also may play a role.
At the other extreme, an agent that is usually harmless in healthy persons may cause devastating disease under different conditions. Pneumocystis carinii is an organism that harmlessly colonizes the respiratory tract of some healthy persons, but can cause potentially lethal pneumonia in persons whose immune systems have been weakened by human immunodeficiency virus (HIV). Presence of Pneumocystis carinii organisms is therefore a necessary but not sufficient cause of pneumocystis pneumonia. In Figure 1.17, it would be represented by component cause A.
As the model indicates, a particular disease may result from a variety of different sufficient causes or pathways. For example, lung cancer may result from a sufficient cause that includes smoking as a component cause. Smoking is not a sufficient cause by itself, however, because not all smokers develop lung cancer. Neither is smoking a necessary cause, because a small fraction of lung cancer victims have never smoked. Suppose Component Cause B is smoking and Component Cause C is asbestos. Sufficient Cause I includes both smoking (B) and asbestos (C). Sufficient Cause II includes smoking without asbestos, and Sufficient Cause III includes asbestos without smoking. But because lung cancer can develop in persons who have never been exposed to either smoking or asbestos, a proper model for lung cancer would have to show at least one more Sufficient Cause Pie that does not include either component B or component C.
Note that public health action does not depend on the identification of every component cause. Disease prevention can be accomplished by blocking any single component of a sufficient cause, at least through that pathway. For example, elimination of smoking (component B) would prevent lung cancer from sufficient causes I and II, although some lung cancer would still occur through sufficient cause III.
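A minimal sketch of Rothman’s model in code, using the smoking/asbestos example above: each sufficient cause is a set of component causes, and disease occurs once any one set is complete. The “U” components are hypothetical placeholders for the unspecified complementary causes that complete each pie.

```python
# The three sufficient causes (pies) from the lung cancer example,
# modelled as sets of component causes.
SUFFICIENT_CAUSES = {
    "I": {"smoking", "asbestos", "U1"},
    "II": {"smoking", "U2"},
    "III": {"asbestos", "U3"},
}

def completed_pies(factors):
    """Return the names of the pies fully contained in a person's factors."""
    return [name for name, pie in SUFFICIENT_CAUSES.items() if pie <= factors]

def disease_occurs(factors):
    """Disease occurs once any single sufficient cause is complete."""
    return bool(completed_pies(factors))

exposed = {"smoking", "asbestos", "U1", "U2", "U3"}
print(completed_pies(exposed))                # ['I', 'II', 'III']
# Blocking one component (eliminating smoking) closes pathways I and II,
# but disease can still occur through pathway III.
print(completed_pies(exposed - {"smoking"}))  # ['III']
print(disease_occurs(exposed - {"smoking", "asbestos"}))  # False
```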

CONCLUSION
Unlike infectious diseases of the past, diseases prevalent in modern industrialized societies have multifactorial origins whose complexity so far has defied an integrated scientific understanding. Their epidemiologic investigation suffers from the conceptual inability of formulating plausible causal hypotheses that mimic a complex reality, and from the practical difficulties of running elaborate studies controlled for multifactorial confounders. Until biomedical research provides a satisfactory understanding of the complex mechanistic determinants of such diseases, epidemiology can only field reductionist causal hypotheses, leading to results of uncertain significance. Consensual but rationally weak criteria devised to extract inferences of causality from such results confirm the generic inadequacy of epidemiology in this area, and are unable to provide definitive scientific support for the perceived mandate for public health action.

REFERENCES
Hill, Austin Bradford (1965). “The Environment and Disease: Association or Causation?”. Proceedings of the Royal Society of Medicine 58 (5): 295–300. PMC 1898525. PMID 14283879.
Höfler M (2005). “The Bradford Hill considerations on causality: a counterfactual perspective?”. Emerging themes in epidemiology 2 (1): 11. doi:10.1186/1742-7622-2-11. PMC 1291382. PMID 16269083.
Howick J, Glasziou P, Aronson JK (2009). “The evolution of evidence hierarchies: what can Bradford Hill’s ‘guidelines for causation’ contribute?”. Journal of the Royal Society of Medicine 102 (5): 186–94. doi:10.1258/jrsm.2009.090020. PMC 2677430. PMID 19417051.
Glass TA, Goodman SN, Hernán MA, Samet JM (2013). “Causal inference in public health”. Annu Rev Public Health 34: 61–75. doi:10.1146/annurev-publhealth-031811-124606. PMC 4079266. PMID 23297653.
Potischman N, Weed DL (1999). “Causal criteria in nutritional epidemiology”. Am J Clin Nutr 69 (6): 1309S–1314S. PMID 10359231.
Rothman KJ, Greenland S (2005). “Causation and causal inference in epidemiology”. Am J Public Health 95 (Suppl 1): S144–50. doi:10.2105/AJPH.2004.059204. PMID 16030331.
Phillips, CV; Goodman KJ (2006). “Causal criteria and counterfactuals; nothing more (or less) than scientific common sense?”. Emerging themes in epidemiology 3 (1): 5. doi:10.1186/1742-7622-3-5. PMC 1488839. PMID 16725053.

DISCUSS THE EPIDEMIOLOGY OF HYPERTENSION IN NIGERIA


DISCUSS THE EPIDEMIOLOGY OF HYPERTENSION IN NIGERIA
INTRODUCTION
The number of people living with hypertension (high blood pressure) is predicted to reach 1.56 billion worldwide by the year 2025. In Nigeria alone, about a third of all people over the age of 20 years had hypertension in 2011-2012, as measured by elevated blood pressure or current use of antihypertensive medication.
Control of hypertension has become a key national priority in the US as part of the Million Hearts initiative from the Department of Health and Human Services, which aims to prevent 1 million heart attacks and strokes in the US by 2017.
An increasing prevalence of the condition is blamed on lifestyle factors, such as physical inactivity, a salt-rich diet created by processed and fatty foods, and alcohol and tobacco use.
Hypertension is a chronic medical condition in which the blood pressure in the arteries is persistently elevated. Blood pressure is expressed by two measurements, the systolic and diastolic pressures, which are the maximum and minimum pressures, respectively, in the arterial system. The systolic pressure occurs when the left ventricle is most contracted; the diastolic pressure occurs when the left ventricle is most relaxed prior to the next contraction. Normal blood pressure at rest is within the range of 100–140 millimeters of mercury (mmHg) systolic and 60–90 mmHg diastolic. Hypertension is present if the blood pressure is persistently at or above 140/90 mmHg for most adults; different numbers apply to children.
Hypertension usually does not cause symptoms initially, but sustained hypertension over time is a major risk factor for hypertensive heart disease, coronary artery disease, stroke, aortic aneurysm, peripheral artery disease, and chronic kidney disease.
EPIDEMIOLOGY OF HYPERTENSION IN NIGERIA
Hypertension means having a blood pressure higher than 140 over 90 mmHg, a definition shared by all the major medical guidelines. This means the systolic reading (the pressure as the heart pumps blood around the body) is over 140 mmHg (millimeters of mercury) or the diastolic reading (as the heart relaxes and refills with blood) is over 90 mmHg. This threshold has been set partly for clinical convenience and because achieving targets below this level brings benefits for patients.

The blood flowing inside vessels exerts a force against the walls – this is blood pressure.

Hypertension is classified as either primary (essential) hypertension or secondary hypertension. About 90–95% of cases are categorized as primary hypertension, defined as high blood pressure with no obvious underlying cause. The remaining 5–10% of cases are categorized as secondary hypertension, defined as hypertension due to an identifiable cause, such as chronic kidney disease, narrowing of the aorta or kidney arteries, or an endocrine disorder such as excess aldosterone, cortisol, or catecholamines.
Dietary and lifestyle changes can improve blood pressure control and decrease the risk of health complications, although treatment with medication is still often necessary in people for whom lifestyle changes are not enough or not effective. The treatment of moderately high arterial blood pressure (defined as >160/100 mmHg) with medications is associated with an improved life expectancy. The benefits of treatment of blood pressure that is between 140/90 mmHg and 160/100 mmHg are less clear, with some reviews finding no benefit and others finding benefit.
Signs and symptoms
Hypertension is defined as a systolic blood pressure (SBP) of 140 mm Hg or more, or a diastolic blood pressure (DBP) of 90 mm Hg or more, or taking antihypertensive medication.
Based on recommendations of the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 7), the classification of BP for adults aged 18 years or older has been as follows (a short classification sketch in code follows the list):
• Normal: Systolic lower than 120 mm Hg, diastolic lower than 80 mm Hg
• Prehypertension: Systolic 120-139 mm Hg, diastolic 80-89 mm Hg
• Stage 1: Systolic 140-159 mm Hg, diastolic 90-99 mm Hg
• Stage 2: Systolic 160 mm Hg or greater, diastolic 100 mm Hg or greater
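The JNC 7 bands above amount to a simple classification rule. The sketch below assumes that when the systolic and diastolic readings fall into different categories, the higher (worse) category applies; the function name is a hypothetical convenience, not part of the guideline.

```python
def classify_bp(systolic, diastolic):
    """Classify an adult resting BP reading (mm Hg) into the JNC 7
    categories listed above. Checking the worst category first means
    the higher of the two readings governs the result."""
    if systolic >= 160 or diastolic >= 100:
        return "Stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "Stage 1 hypertension"
    if systolic >= 120 or diastolic >= 80:
        return "Prehypertension"
    return "Normal"

print(classify_bp(118, 76))   # Normal
print(classify_bp(128, 84))   # Prehypertension
print(classify_bp(135, 95))   # Stage 1 hypertension (diastolic governs)
print(classify_bp(162, 88))   # Stage 2 hypertension (systolic governs)
```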
Hypertension may be primary, which may develop as a result of environmental or genetic causes, or secondary, which has multiple etiologies, including renal, vascular, and endocrine causes. Primary or essential hypertension accounts for 90-95% of adult cases, and secondary hypertension accounts for 2-10% of cases.
Diagnosis
The evaluation of hypertension involves accurately measuring the patient’s blood pressure, performing a focused medical history and physical examination, and obtaining results of routine laboratory studies. A 12-lead electrocardiogram should also be obtained. These steps can help determine the following:
• Presence of end-organ disease
• Possible causes of hypertension
• Cardiovascular risk factors
• Baseline values for judging biochemical effects of therapy
Other studies may be obtained on the basis of clinical findings or in individuals with suspected secondary hypertension and/or evidence of target-organ disease, such as a complete blood count (CBC), chest radiograph, uric acid, and urine microalbumin.
Management
Many guidelines exist for the management of hypertension. Most groups, including the JNC, the American Diabetes Association (ADA), and the American Heart Association/American Stroke Association (AHA/ASA), recommend lifestyle modification as the first step in managing hypertension.
Lifestyle modifications
JNC 7 recommendations to lower BP and decrease cardiovascular disease risk include the following, with greater results achieved when 2 or more lifestyle modifications are combined (a rough arithmetic sketch follows the list):
• Weight loss (range of approximate systolic BP [SBP] reduction, 5-20 mm Hg per 10 kg of weight lost)
• Limit alcohol intake to no more than 1 oz (30 mL) of ethanol per day for men or 0.5 oz (15 mL) of ethanol per day for women and people of lighter weight (range of approximate SBP reduction, 2-4 mm Hg)
• Reduce sodium intake to no more than 100 mmol/day (2.4 g sodium or 6 g sodium chloride; range of approximate SBP reduction, 2-8 mm Hg)
• Maintain adequate intake of dietary potassium (approximately 90 mmol/day)
• Maintain adequate intake of dietary calcium and magnesium for general health
• Stop smoking and reduce intake of dietary saturated fat and cholesterol for overall cardiovascular health
• Engage in aerobic exercise at least 30 minutes daily for most days (range of approximate SBP reduction, 4-9 mm Hg)
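To get a feel for the magnitudes involved, the sketch below naively sums the published ranges for a chosen combination of modifications. This is only a rough illustration: JNC 7 states that combining modifications achieves greater results, but it does not claim the effects are strictly additive, so treat the sum as an optimistic upper bound rather than a clinical estimate.

```python
# Approximate SBP reductions (mm Hg) from the JNC 7 list above.
SBP_REDUCTION_RANGES = {
    "weight loss (per 10 kg lost)": (5, 20),
    "limit alcohol intake": (2, 4),
    "reduce sodium intake": (2, 8),
    "aerobic exercise": (4, 9),
}

def combined_reduction_range(modifications):
    """Naively sum the published low and high ends of each chosen range.

    Real effects are unlikely to be strictly additive; this is only an
    illustration of the published per-modification figures."""
    low = sum(SBP_REDUCTION_RANGES[m][0] for m in modifications)
    high = sum(SBP_REDUCTION_RANGES[m][1] for m in modifications)
    return low, high

print(combined_reduction_range(["reduce sodium intake", "aerobic exercise"]))
# (6, 17) -- i.e., roughly 6-17 mm Hg if the effects simply added up
```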
The AHA/ASA recommends a diet that is low in sodium, is high in potassium, and promotes the consumption of fruits, vegetables, and low-fat dairy products for reducing BP and lowering the risk of stroke. Other recommendations include increasing physical activity (30 minutes or more of moderate intensity activity on a daily basis) and losing weight (for overweight and obese persons).
The 2013 European Society of Hypertension (ESH) and the European Society of Cardiology (ESC) guidelines recommend a low-sodium diet (limited to 5 to 6 g per day) as well as reducing body-mass index (BMI) to 25 kg/m2 and waist circumference (to < 102 cm in men and < 88 cm in women).

Pharmacologic therapy
If lifestyle modifications are insufficient to achieve the goal BP, there are several drug options for treating and managing hypertension. Thiazide diuretics are the preferred agents in the absence of compelling indications.
Compelling indications may include high-risk conditions such as heart failure, ischemic heart disease, chronic kidney disease, and recurrent stroke, or those conditions commonly associated with hypertension, including diabetes and high coronary disease risk. Drug intolerability or contraindications may also be factors. An angiotensin-converting enzyme (ACE) inhibitor, angiotensin receptor blocker (ARB), calcium channel blocker (CCB), and beta-blocker are all acceptable alternative agents in such compelling cases.
The following are drug class recommendations for compelling indications based on various clinical trials:
• Heart failure: Diuretic, beta-blocker, ACE inhibitor, ARB, aldosterone antagonist
• Postmyocardial infarction: Beta-blocker, ACE inhibitor, aldosterone antagonist
• High coronary disease risk: Diuretic, beta-blocker, ACE inhibitor, CCB
• Diabetes: Diuretic, beta-blocker, ACE inhibitor, ARB, CCB
• Chronic kidney disease: ACE inhibitor, ARB
• Recurrent stroke prevention: Diuretic, ACE inhibitor
CONCLUSION

Hypertension is a leading cause of morbidity and mortality in Africa, and Nigeria, the most populous country on the continent, contributes hugely to this burden. The problems caused by hypertension are made worse, according to WHO, when people are not aware of the necessity for – or are unable to afford – regular blood pressure checks. “We hope this campaign will encourage more adults to check their blood pressure but also that health authorities will make blood pressure measurement affordable for everyone,” said Dr Shanthi Mendis, a medical officer at WHO.

REFERENCES
American Diabetes Association. Standards of medical care in diabetes — 2014. Diabetes Care. 2014;37 Suppl 1:S14-S80.
Goldstein LB, Bushnell CD, Adams RJ, et al. Guidelines for the primary prevention of stroke: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2011 Feb;42:517-584.
Handler J, et al. 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014 Feb 5;311(5):507-520.
James PA, Oparil S, Carter BL, et al. 2014 Evidence-Based Guideline for the Management of High Blood Pressure in Adults: Report From the Panel Members Appointed to the Eighth Joint National Committee (JNC 8). JAMA. 2014;311(5):507-520.
Kaplan NM. Systemic hypertension: Treatment. In: Bonow RO, Mann DL, Zipes DP, Libby P, eds. Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine. 9th ed. Philadelphia, PA: Saunders Elsevier; 2011:chap 46.
Peterson ED, Gaziano JM, Greenland P. Recommendations for treating hypertension: what are the right goals and purposes? JAMA. 2014 Feb 5;311(5):474-476.
Victor RG. Systemic hypertension: Mechanisms and diagnosis. In: Bonow RO, Mann DL, Zipes DP, Libby P, eds. Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine. 9th ed. Philadelphia, PA: Saunders Elsevier; 2011:chap 45.