Methodology is the systematic, theoretical analysis of the methods applied to a field of study. It comprises the theoretical analysis of the body of methods and principles associated with a branch of knowledge. Typically, it encompasses concepts such as paradigm, theoretical model, phases, and quantitative or qualitative techniques.
A methodology does not set out to provide solutions; it is therefore not the same as a method. Instead, a methodology offers the theoretical underpinning for understanding which method, set of methods, or best practices can be applied to a specific case, for example, to calculate a specific result.
The methodology is the general research strategy that outlines the way in which research is to be undertaken and, among other things, identifies the methods to be used in it. These methods, described in the methodology, define the means or modes of data collection or, sometimes, how a specific result is to be calculated. Within a study of methodology, such processes constitute a constructive generic framework, and may therefore be broken down into sub-processes, combined, or reordered.
A paradigm is similar to a methodology in that it is also a constructive framework. In theoretical work, the development of paradigms satisfies most or all of the criteria for methodology. Any description of a means of calculation of a specific result is always a description of a method and never a description of a methodology.
It is thus important to avoid using methodology as a synonym for method or body of methods. Doing this shifts it away from its true epistemological meaning and reduces it to being the procedure itself, or the set of tools, or the instruments that should have been its outcome. A methodology is the design process for carrying out research or the development of a procedure and is not in itself an instrument, or method, or procedure for doing things.
Methodology and method are not interchangeable. In recent years, however, there has been a tendency to use methodology as a "pretentious substitute for the word method".
The goal of weather prediction, the Norwegian scientist Vilhelm Bjerknes realized, is not to know the paths of individual air molecules over time, but to provide the public with "average values over large areas and long periods of time."
Bjerknes published a short paper describing what he called "the principle of predictive meteorology" (Bjerknes; see the Research links for the entire paper). In it, he says: Based upon the observations that have been made, the initial state of the atmosphere is represented by a number of charts which give the distribution of seven variables from level to level in the atmosphere.
With these charts as the starting point, new charts of a similar kind are to be drawn, which represent the new state from hour to hour. In other words, Bjerknes envisioned drawing a series of weather charts for the future based on using known quantities and physical principles. He proposed that solving the complex equation could be made more manageable by breaking it down into a series of smaller, sequential calculations, where the results of one calculation are used as input for the next.
As a simple example, imagine predicting traffic patterns in your neighborhood. You start by drawing a map of your neighborhood showing the location, speed, and direction of every car within a square mile. Using these parameters, you then calculate where all of those cars are one minute later.
Then again after a second minute. Your calculations will likely look pretty good after the first minute. After the second, third, and fourth minutes, however, they begin to become less accurate. Other factors you had not included in your calculations begin to exert an influence, like where the person driving the car wants to go, the right- or left-hand turns that they make, delays at traffic lights and stop signs, and how many new drivers have entered the roads.
Trying to include all of this information simultaneously would be mathematically difficult, so, as proposed by Bjerknes, the problem can be solved with sequential calculations. To do this, you would take the first step as described above: Use location, speed, and direction to calculate where all the cars are after one minute.
Next, you would use the information on right- and left-hand turn frequency to calculate changes in direction, and then you would use information on traffic light delays and new traffic to calculate changes in speed.
After these three steps are done, you would solve your first equation again for the second minute time sequence, using location, speed, and direction to calculate where the cars are after the second minute.
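The three-step loop above can be sketched in code. This is a minimal illustration of Bjerknes' sequential-calculation idea, not a real traffic model: the car states, the constant turn rate, and the delay factor are all made-up, illustrative parameters.

```python
import math

# Hypothetical car states: position (x, y) in meters, speed in m/s,
# heading in radians. The numbers are illustrative only.
cars = [
    {"x": 0.0, "y": 0.0, "speed": 10.0, "heading": 0.0},
    {"x": 50.0, "y": 20.0, "speed": 8.0, "heading": math.pi / 2},
]

def step_positions(cars, dt=60.0):
    """Step 1: advance each car along its current heading for dt seconds."""
    for c in cars:
        c["x"] += c["speed"] * math.cos(c["heading"]) * dt
        c["y"] += c["speed"] * math.sin(c["heading"]) * dt

def step_turns(cars, turn_rate=0.0):
    """Step 2: apply an assumed average turn frequency to each heading."""
    for c in cars:
        c["heading"] += turn_rate

def step_speeds(cars, delay_factor=1.0):
    """Step 3: adjust speeds for traffic-light delays and new traffic."""
    for c in cars:
        c["speed"] *= delay_factor

# One simulated minute = the three sub-steps applied in sequence;
# repeating the loop gives minute two, minute three, and so on,
# with each minute's output feeding the next minute's input.
for minute in range(2):
    step_positions(cars)
    step_turns(cars)
    step_speeds(cars)
```

The key design point is that no single equation tries to capture everything at once: each sub-step is simple on its own, and complexity emerges only from chaining them through time.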
Though it would certainly be rather tiresome to do by hand, this series of sequential calculations would provide a manageable way to estimate traffic patterns over time. Although this method made calculations tedious, Bjerknes imagined "no intractable mathematical difficulties" with predicting the weather. The method he proposed but never used himself became known as numerical weather prediction, and it represents one of the first approaches towards numerical modeling of a complex, dynamic system.
Bjerknes' challenge for numerical weather prediction was taken up sixteen years later by the English scientist Lewis Fry Richardson.
Richardson developed seven differential equations that built on Bjerknes' atmospheric circulation equation to include additional atmospheric processes. One of Richardson's great contributions to mathematical modeling was to solve the equations for boxes within a grid; he divided the atmosphere over Germany into 25 squares that corresponded with available weather station data (see Figure 4), and then divided the atmosphere into five layers, creating a three-dimensional grid of boxes.
This was the first use of a technique that is now standard in many types of modeling. For each box, he calculated each of nine variables in seven equations for a single time step of three hours. This was not a simple sequential calculation, however, since the values in each box depended on the values in the adjacent boxes, in part because the air in each box does not simply stay there: it moves from box to box. Richardson's attempt to make a six-hour forecast took him nearly six weeks of work with pencil and paper and was considered an utter failure, as it resulted in calculated barometric pressures that exceeded any historically measured value (Dalmedico). Probably influenced by Bjerknes, Richardson attributed the failure to inaccurate input data, whose errors were magnified through successive calculations (see more about error propagation in our Uncertainty, Error, and Confidence module).
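The grid-of-boxes idea can be sketched with a single scalar field (say, pressure) on a small two-dimensional grid, where each box's new value depends on its neighbors. The averaging rule below is a made-up illustration of neighbor coupling, not Richardson's actual equations.

```python
def step(grid):
    """Advance the field one time step; each interior box is nudged
    toward the mean of its four neighbors, so no box can be updated
    in isolation. Boundary boxes are held fixed for simplicity."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # next state, computed from the old one
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            neighbors = (grid[i - 1][j] + grid[i + 1][j] +
                         grid[i][j - 1] + grid[i][j + 1]) / 4.0
            new[i][j] = grid[i][j] + 0.5 * (neighbors - grid[i][j])
    return new

# A 3 x 3 toy field with one high-pressure box in the center.
field = [
    [1.0, 1.0, 1.0],
    [1.0, 5.0, 1.0],
    [1.0, 1.0, 1.0],
]
field = step(field)  # the central value relaxes toward its neighbors
```

Note that the new state is written into a separate array and only swapped in at the end of the step; updating boxes in place would let early updates contaminate later ones within the same time step.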
In addition to his concerns about inaccurate input parameters, Richardson realized that weather prediction was limited in large part by the speed at which individuals could calculate by hand. He thus envisioned a "forecast factory," in which thousands of people would each complete one small part of the necessary calculations for rapid weather forecasting.
Richardson's vision became reality in a sense with the birth of the computer, which was able to do calculations far faster and with fewer errors than humans. The computer used for the first one-day weather prediction, nicknamed ENIAC (Electronic Numerical Integrator and Computer), was 8 feet tall and 3 feet wide, a behemoth by modern standards, but it was so much faster than Richardson's hand calculations that meteorologists were soon using it to make forecasts twice a day (Weart). Over time, the accuracy of the forecasts increased as better data became available over the entire globe through radar technology and, eventually, satellites.
The process of numerical weather prediction developed by Bjerknes and Richardson laid the foundation not only for modern meteorology, but for computer-based mathematical modeling as we know it today. In fact, after Bjerknes died, the Norwegian government recognized the importance of his contributions to the science of meteorology by issuing a stamp bearing his portrait (see Figure 5).
The desire to model Earth's climate on a long-term, global scale grew naturally out of numerical weather prediction. The goal was to use equations to describe atmospheric circulation in order to understand not just tomorrow's weather, but large-scale patterns in global climate, including dynamic features like the jet stream and major climatic shifts over time like ice ages. Initially, scientists were hindered in the development of valid models by three things. Unexpectedly, World War II helped solve one of them, as the newly developed technology of high-altitude aircraft offered a window into the upper atmosphere (see our Technology module for more information on the development of aircraft).
The jet stream, now a familiar feature of the weather broadcast on the news, was in fact first documented by American bombers flying westward to Japan.
As a result, global atmospheric models began to feel more within reach. Norman Phillips, a meteorologist at Princeton University, built a mathematical model of the atmosphere based on fundamental thermodynamic equations (Phillips). He defined 26 variables related through 47 equations, which described things like evaporation from Earth's surface, the rotation of the Earth, and the change in air pressure with temperature. In the model, each of the 26 variables was calculated in each square of a 16 x 17 grid that represented a piece of the northern hemisphere.
The grid represented an extremely simple landscape — it had no continents or oceans, no mountain ranges or topography at all.
This was not because Phillips thought it was an accurate representation of reality, but because it simplified the calculations.
He started his model with the atmosphere "at rest," with no predetermined air movement, and with yearly averages of input parameters like air temperature. Phillips ran the model through 26 simulated day-night cycles by using the same kind of sequential calculations Bjerknes proposed.
Within only one "day," a pattern in atmospheric pressure developed that strongly resembled the typical weather systems of the portion of the northern hemisphere he was modeling (see Figure 6). In other words, despite the simplicity of the model, Phillips was able to reproduce key features of atmospheric circulation, showing that the topography of the Earth was not of primary importance in atmospheric circulation.
His work laid the foundation for an entire subdiscipline within climate science. Eventually, computing power increased to the point where modelers could incorporate the distribution of oceans and continents into their models. The eruption of Mt. Pinatubo in the Philippines then provided a natural experiment: How would the addition of a significant volume of sulfuric acid, carbon dioxide, and volcanic ash affect global climate? In the aftermath of the eruption, descriptive methods (see our Description in Scientific Research module) were used to document its effect on global climate: Worldwide measurements of sulfuric acid and other components were taken, along with the usual air temperature measurements.
Scientists could see that the large eruption had affected climate, and they quantified the extent to which it had done so. This provided a perfect test for general circulation models (GCMs). Given the inputs from the eruption, could they accurately reproduce the effects that descriptive research had shown?
Within a few years, scientists had demonstrated that GCMs could indeed reproduce the climatic effects induced by the eruption, and confidence in the abilities of GCMs to provide reasonable scenarios for future climate change grew. The validity of these models has been further substantiated by their ability to simulate past events, like ice ages, and the agreement of many different models on the range of possibilities for warming in the future, one of which is shown in Figure 7.
The widespread use of modeling has also led to widespread misconceptions about models, particularly with respect to their ability to predict. Some models are widely used for prediction, such as weather and streamflow forecasts, yet we know that weather forecasts are often wrong. Modeling still cannot predict exactly what will happen to the Earth's climate, but it can help us see the range of possibilities with a given set of changes. All models are also limited by the availability of data from the real system.