Models used in quantitative analysis. General characteristics of qualitative and quantitative methods. Types of data and ways to analyze them. Basic concepts of the entity-relationship model


The concepts of quantitative and qualitative methods in psychology

Defining methods as ways of cognition, S.L. Rubinstein noted that methodology should be conscious and should not turn into a form mechanically imposed on the specific content of a science. Let us consider how these paths of cognition are understood in psychology and how researchers define quantitative and qualitative methods.

Among the main psychological methods, S.L. Rubinstein in "Fundamentals of General Psychology" names observation, experiment, and methods of studying the products of activity. This list does not include quantitative methods.

In the 1970s, a second classification of methods of psychological research appeared, created by B.G. Ananiev.

He distinguishes the following groups of methods:

  1. Organizational methods;
  2. Empirical methods;
  3. Data processing methods;
  4. Interpretation methods.

Quantitative and qualitative methods were classified as data processing methods. He defines quantitative methods as mathematical and statistical methods of processing psychological information, and qualitative methods as descriptions of those cases that most fully reflect the types and variants of mental phenomena or that constitute exceptions to the general rules.

B.G. Ananiev's classification was criticized by V.N. Druzhinin, a representative of the Yaroslavl school, who offered his own classification.

By analogy with other sciences, he distinguishes three classes of methods in psychology:

  1. Empirical;
  2. Theoretical;
  3. Interpretive.

Qualitative and quantitative methods are again not listed separately in this classification, but it is assumed that they belong to the empirical methods, which distinguishes it from B.G. Ananiev's classification. V.V. Nikandrov, a representative of the Leningrad school of psychologists, significantly supplemented B.G. Ananiev's classification. He assigns quantitative and qualitative methods to the non-empirical methods, in accordance with the criterion of the stage of the psychological research process. By non-empirical methods the author understands "research methods of psychological work outside the contact of the researcher and the individual".

Besides the differences between the classifications of S.L. Rubinstein and B.G. Ananiev themselves, there are terminological discrepancies in the understanding of quantitative and qualitative methods.

An exact definition of these methods is not given in the works of V.V. Nikandrov. He defines qualitative methods functionally, from the point of view of their result, and lists among them:

  1. Classification;
  2. Typology;
  3. Systematization;
  4. Periodization;
  5. Psychological casuistry.

He replaces the notion of a quantitative method with that of quantitative processing, which is mainly aimed at a formal, external study of the object. As synonyms V.V. Nikandrov uses such expressions as quantitative methods, quantitative processing, and quantitative research. The author considers the main quantitative methods to be the methods of primary and secondary processing.

Thus, the problem of terminological inaccuracy is quite relevant and takes on a new meaning when researchers seek to assign quantitative methods to the new scientific sections "Psychometrics" and "Mathematical Psychology".

Reasons for terminological discrepancies

There are a number of reasons why there is no strict definition of quantitative and qualitative methods in psychology:

  • Within the domestic tradition, quantitative methods have not received an unambiguous, strict definition and classification, which indicates methodological pluralism;
  • Quantitative and qualitative methods in the tradition of the Leningrad school are considered a non-empirical stage of research, whereas the Moscow school interprets these methods as empirical and elevates them to the status of a methodological approach;
  • Behind the terminological confusion among the concepts "quantitative", "formal", "mathematical" and "statistical" lies a conventionalism that has developed in the psychological community regarding the definition of quantitative and qualitative methods;
  • There is borrowing from the American tradition of dividing all methods into quantitative and qualitative. Quantitative methods, or more precisely quantitative research, involve expressing and measuring results in quantitative terms, while qualitative methods are seen as "humanitarian" research;
  • An unambiguous determination of the place and relationship of quantitative and qualitative methods most likely leads to the subordination of quantitative methods to qualitative ones;
  • The modern theory of method moves away from classifying methods on a single basis and from a strictly defined procedure of the method. Methodologists distinguish three directions in this theory:
    1. Improvement of the traditional empirical model;
    2. Criticism of the empirical quantitative model;
    3. Analysis and testing of alternative research models.
  • Different directions in the development of the theory of method reveal a tendency of researchers to gravitate toward qualitative methods.

Quantitative Methods

The purpose of practical psychology is not to establish patterns, but to understand and describe problems, so it uses both qualitative and quantitative methods.

Quantitative methods are techniques for processing numerical information; they are mathematical in nature. Quantitative methods such as categorized observation, testing, document analysis, and even experiment provide information for diagnosing a problem. The effectiveness of the work is determined at the final stage. The main part of the work (conversations, trainings, games, discussions) is carried out using qualitative methods. Among quantitative methods, testing is the most popular.

Quantitative methods are widely used in scientific research and in the social sciences, for example, in testing statistical hypotheses. Quantitative methods are used to process the results of mass public opinion surveys. To create tests, psychologists use the apparatus of mathematical statistics.

Methods of quantitative analysis are divided into two groups:

  1. Methods of statistical description. As a rule, they are aimed at obtaining quantitative characteristics;
  2. Methods of statistical inference. They make it possible to correctly extend the obtained results to the entire phenomenon, to draw a conclusion of a general nature.

With the help of quantitative methods, stable trends are identified and their explanations are built.
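A minimal sketch of these two groups of methods, using hypothetical samples rather than data from the source, might look as follows: descriptive characteristics are computed first, and a statistical inference test then checks whether the observed difference can be generalized.

```python
# Sketch of statistical description vs. statistical inference (hypothetical data).
import numpy as np
from scipy import stats

group_a = np.array([14, 17, 15, 19, 16, 18, 15, 17])   # e.g. test scores, group A
group_b = np.array([12, 14, 13, 15, 12, 16, 13, 14])   # e.g. test scores, group B

# 1. Statistical description: quantitative characteristics of each sample.
for name, sample in (("A", group_a), ("B", group_b)):
    print(f"group {name}: mean={sample.mean():.2f}, std={sample.std(ddof=1):.2f}, "
          f"min={sample.min()}, max={sample.max()}")

# 2. Statistical inference: can the observed difference be extended to the whole phenomenon?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 0.05 level.")
else:
    print("No statistically significant difference was detected.")
```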

The disadvantages of the quantitative control method are related to its limitations. In teaching psychology, these methods of assessing knowledge can only be used for intermediate control: checking knowledge of terminology, of textbook experimental research, or of theoretical concepts.

Qualitative Methods

Qualitative methods have been gaining increased interest and popularity only recently, which is associated with the demands of practice. In applied psychology, the scope of qualitative methods is very wide:

  • Social psychology carries out humanitarian expert evaluation of social programs (pension reform, reform of education and health care) using qualitative methods;
  • Political psychology. Qualitative methods are needed here to build an adequate and effective election campaign and to form a positive image of politicians, parties, and the entire system of public administration. Important here are not only the quantitative indicators of a trust rating, but also the reasons for this rating, ways to change it, etc.;
  • With the help of qualitative methods, the psychology of mass communication examines the degree of trust in particular print media, specific journalists, and programs.

The decisive role in the development of qualitative methods in psychology, therefore, was played by the need for a dialogue between psychological science and various fields of practical activity.

Qualitative methods are focused on the analysis of information that is mainly presented in verbal form, so there is a need to compress this verbal information, i.e. to obtain it in a more compact form. Coding acts as the main compression technique.

Coding involves the selection of semantic segments of the text, their categorization and reorganization.

Examples of information compression are schemes, tables, diagrams. Thus, coding and visual representation of information are the main methods of qualitative analysis.
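As a minimal illustration of coding as a compression technique (the interview fragments and category labels below are hypothetical), semantic segments are assigned category codes and then reduced to a compact frequency table:

```python
# Sketch of qualitative coding: verbal segments -> category codes -> frequency table.
from collections import Counter

# Semantic segments selected from an interview, each tagged with a category code.
coded_segments = [
    ("I worry before every exam", "anxiety"),
    ("My friends help me prepare", "social support"),
    ("I make a study plan for the week", "self-organization"),
    ("I cannot sleep the night before", "anxiety"),
    ("The teacher explains things clearly", "social support"),
]

# Compression: the verbal material is reduced to a compact frequency table.
frequencies = Counter(code for _, code in coded_segments)
for category, count in frequencies.most_common():
    print(f"{category}: {count}")
```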

To carry out a quantitative analysis of the diagrams, we list the indicators of the model:

Number of blocks on the diagram - N;

Diagram decomposition level - L;

Diagram balance - B;

Number of arrows connected to a block - A.

This set of factors applies to each diagram of the model. Recommendations for the desired values of these factors are listed below.

It is necessary to strive for the number of blocks on lower-level diagrams to be lower than the number of blocks on the parent diagrams, i.e. for the N/L coefficient to decrease as the decomposition level increases. A decrease in this coefficient indicates that as the model is decomposed the functions are simplified and, therefore, the number of blocks decreases.

Diagrams must be balanced. This means that, within a single diagram, the situation shown in Fig. 14 should not occur: Job 1 has significantly more incoming and control arrows than outgoing ones. It should be noted that this recommendation may not hold for models describing production processes. For example, when describing an assembly procedure, a block may have many incoming arrows describing the components of a product and a single outgoing arrow for the finished product.

Fig. 14. An example of an unbalanced diagram

Let us introduce the diagram balance factor:

$$K_b = \left|\, \max_i A_i \;-\; \frac{1}{N}\sum_{i=1}^{N} A_i \,\right|$$

One should strive for K_b to be minimal for the diagram.

In addition to analyzing the graphic elements of the diagram, it is necessary to consider the names of the blocks. To evaluate the names, a dictionary of elementary (trivial) functions of the simulated system is compiled. The functions of the lowest decomposition level of the diagrams should, in fact, fall into this dictionary. For example, for a database model the functions "find a record" and "add a record to the database" may be elementary, while the function "user registration" requires further description.

After forming the dictionary and compiling the package of system diagrams, it is necessary to consider the lowest level of the model. If it shows matches between the names of diagram blocks and words from the dictionary, this indicates that a sufficient level of decomposition has been reached. The coefficient that quantitatively reflects this criterion can be written as L*C, the product of the model level and the number of matches of block names with words from the dictionary. The lower the level of the model (the larger L), the more valuable the matches.
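A minimal sketch of how these indicators can be computed for a hypothetical diagram is given below; the block names, arrow counts and dictionary are invented, and the balance factor is taken in the form introduced above:

```python
# Sketch of the quantitative diagram indicators: N/L, Kb, and L*C (hypothetical diagram).

def n_to_l(num_blocks: int, level: int) -> float:
    """N/L coefficient: should decrease as the decomposition level grows."""
    return num_blocks / level

def balance_factor(arrows_per_block: list[int]) -> float:
    """Kb = |max(Ai) - sum(Ai)/N|; the smaller, the more balanced the diagram."""
    n = len(arrows_per_block)
    return abs(max(arrows_per_block) - sum(arrows_per_block) / n)

def dictionary_match(level: int, block_names: list[str], dictionary: set[str]) -> int:
    """L*C coefficient: level multiplied by the number of names found in the dictionary."""
    matches = sum(1 for name in block_names if name in dictionary)
    return level * matches

# Hypothetical level-3 diagram with 4 blocks.
arrows = [3, 4, 9, 3]                 # arrows connected to each block
names = ["find a record", "add a record to the database",
         "user registration", "delete a record"]
elementary = {"find a record", "add a record to the database", "delete a record"}

print("N/L =", n_to_l(num_blocks=4, level=3))
print("Kb  =", balance_factor(arrows))          # a large Kb points to an unbalanced diagram
print("L*C =", dictionary_match(3, names, elementary))
```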

DFD Methodology

The DFD methodology is based on building a model of the analyzed AIS, whether designed or actually existing. The main tool for modeling the functional requirements of the system being designed is data flow diagrams (DFD). In accordance with this methodology, the system model is defined as a hierarchy of data flow diagrams. With their help, the requirements are divided into functional components (processes) and presented as a network connected by data flows. The main objective of such tools is to demonstrate how each process transforms its inputs into outputs and to identify the relationships between these processes.

The components of the model are:

  • diagrams;
  • data dictionaries;
  • process specifications.

DFD Diagrams

Data flow diagrams (DFD) are used to describe workflow and information processing. A DFD represents the model of a system as a network of interconnected activities and can be used to display the current workflow operations of corporate information processing systems more visually.

DFD describes:

  • information processing functions (works, activities);
  • documents (arrows), objects, employees or departments that participate in information processing;
  • tables for storing documents (data stores).

BPwin uses the Gane–Sarson notation to draw data flow diagrams (Table 4).

Table 4. Gane–Sarson notation

In diagrams, functional requirements are represented by processes and stores connected by data flow.

An external entity is a material object or an individual, i.e. an entity outside the system context that is a source or receiver of system data (for example, customers, personnel, suppliers, a warehouse, etc.). Its name must contain a noun. It is assumed that the objects represented by such nodes do not participate in any processing.

A system or subsystem: when building a model of a complex IS, it can be represented in general form on the context diagram as a single system as a whole, or it can be decomposed into a number of subsystems. The subsystem number serves to identify it. In the name field, the name of the system is entered as a sentence with a subject and the corresponding definitions and complements.

Processes are intended to produce output streams from input streams in accordance with the action specified by the process name. This name must contain a verb in the indefinite form followed by an object (for example, calculate, check, create, get). The process number serves to identify the process and to refer to it within the diagram. This number can be used together with the diagram number to provide a unique index of the process throughout the model.

Data flows are mechanisms used to model the transfer of information from one part of the system to another. Flows in the diagrams are represented by named arrows whose orientation indicates the direction of information flow. Sometimes information moves in one direction, is processed and returned back to its source; such a situation can be modeled either by two different flows or by a single bidirectional one.
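The following sketch (hypothetical names and a made-up diagram fragment, not a BPwin API) shows one way the DFD components just described, external entities, processes, data stores and named flows, can be represented as data structures and checked:

```python
# Sketch of DFD components as simple data structures (hypothetical diagram fragment).
from dataclasses import dataclass

@dataclass
class Node:
    kind: str      # "external entity", "process", or "data store"
    number: str    # identifies processes/stores within the diagram
    name: str      # noun for entities/stores, verb + object for processes

@dataclass
class Flow:
    name: str
    source: Node
    target: Node

customer = Node("external entity", "E1", "Customer")
register = Node("process", "1.1", "Register order")
orders   = Node("data store", "D1", "Orders")

flows = [
    Flow("order request", customer, register),
    Flow("order record", register, orders),
]

# Every process must transform its inputs into outputs.
for p in (n for n in (customer, register, orders) if n.kind == "process"):
    has_in  = any(f.target is p for f in flows)
    has_out = any(f.source is p for f in flows)
    print(p.name, "ok" if has_in and has_out else "missing input or output")
```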

The abstraction stage in the study of certain physical phenomena or technical objects consists in highlighting their most significant properties and features and presenting these properties and features in the simplified form necessary for subsequent theoretical and experimental research. Such a simplified representation of a real object or phenomenon is called a model.

When using models, some data and properties inherent in the real object are deliberately discarded in order to obtain a solution to the problem more easily, provided these simplifications affect the results only insignificantly.

Depending on the purpose of the study, different models can be used for the same technical device: physical, mathematical, or simulation models.

A model of a complex system can be represented as a block structure, that is, as a connection of links, each of which performs a certain technical function (a functional diagram). As an example, consider the generalized model of an information transmission system shown in Figure 1.2.


Figure 1.2 - Generalized model of the information transmission system

Here, a transmitter is understood as a device that converts the message of source A into signals S that best match the characteristics of a given channel. The operations performed by the transmitter may include primary signal generation, modulation, coding, data compression, and so on. The receiver processes the signals X(t) = S(t) + x(t) at the channel output (taking into account the influence of additive and multiplicative interference x) in order to best reproduce (restore) the transmitted message A at the receiving end. A channel (in the narrow sense) is the medium used to transmit signals from the transmitter to the receiver.
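A minimal numerical sketch of this channel model, with a hypothetical waveform and noise level chosen only for illustration, is:

```python
# Sketch of the channel model X(t) = S(t) + x(t): signal plus additive interference.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)              # time axis, s
s = np.sin(2 * np.pi * 5 * t)                # transmitted signal S(t): a 5 Hz tone
noise = 0.3 * rng.standard_normal(t.size)    # additive interference x(t)
x = s + noise                                # signal at the channel output X(t)

# Signal-to-noise ratio at the receiver input, in dB.
snr_db = 10 * np.log10(np.mean(s**2) / np.mean(noise**2))
print(f"SNR = {snr_db:.1f} dB")
```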

Another example of a complex system model is the phase locked loop (PLL) used to stabilize the intermediate frequency (IF) in radio receivers (Figure 1.3).





Figure 1.3 - PLL system model

The system is designed to stabilize the intermediate frequency f_IF = f_c - f_g by appropriately changing the frequency f_g of the tunable oscillator (local oscillator) when the signal frequency f_c changes. The frequency f_g is, in turn, changed by a controlled element in proportion to the output voltage of the phase discriminator, which depends on the phase difference between the output frequency f_IF and the reference oscillator frequency f_0.
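For a rough numerical illustration with hypothetical frequency values (not given in the source):

$$f_{\text{IF}} = f_c - f_g, \qquad \text{e.g. } f_c = 100\ \text{MHz},\ f_g = 89.3\ \text{MHz} \ \Rightarrow\ f_{\text{IF}} = 10.7\ \text{MHz}.$$

If f_c drifts, the loop retunes f_g so that the difference returns to the nominal intermediate frequency.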

These models make it possible to obtain a qualitative description of the processes, highlight the features of the functioning and performance of the system as a whole, and formulate research objectives. But for a technical specialist, this data, as a rule, is not enough. It is necessary to find out exactly (preferably in numbers and graphs) how well a system or device works, identify quantitative indicators for evaluating efficiency, compare proposed technical solutions with existing analogues to make an informed decision.

For theoretical research, i.e. to obtain not only qualitative but also quantitative indicators and characteristics, it is necessary to perform a mathematical description of the system, that is, to compile its mathematical model.

Mathematical models can be represented by various mathematical means: graphs, matrices, differential or difference equations, transfer functions, graphic connection of elementary dynamic links or elements, probabilistic characteristics, etc.

Thus, the first main question that arises in the quantitative analysis and calculation of electronic devices is the compilation, with the required degree of approximation, of a mathematical model that describes changes in the state of the system over time.

A graphic representation of the system in the form of a connection of various links, where each link is assigned a mathematical operation (a differential equation, a transfer function, a complex transfer coefficient), is called a block diagram. In this case, the main role is played not by the physical structure of the link but by the nature of the relationship between its input and output variables. Thus, different systems can be dynamically equivalent, and after replacing the functional diagram with a structural one, it becomes possible to apply general methods of system analysis regardless of the field of application, physical implementation, and operating principle of the system under study.
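As a hedged sketch of this idea (the link parameters are hypothetical, and scipy.signal is used here only as one convenient way to represent transfer functions), two links of a block diagram are assigned transfer functions and combined in series:

```python
# Sketch of a block diagram as a connection of links described by transfer functions.
import numpy as np
from scipy import signal

# Link 1: first-order low-pass link, W1(s) = 1 / (0.5 s + 1)
w1_num, w1_den = [1.0], [0.5, 1.0]
# Link 2: amplifier with gain 4, W2(s) = 4
w2_num, w2_den = [4.0], [1.0]

# Series connection: W(s) = W1(s) * W2(s) (polynomial multiplication of num/den).
num = np.polymul(w1_num, w2_num)
den = np.polymul(w1_den, w2_den)
system = signal.TransferFunction(num, den)

t, y = signal.step(system)             # step response of the combined model
print("steady-state value =", y[-1])   # approaches the overall gain (4)
```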

Contradictory requirements are imposed on the mathematical model: on the one hand, it should reflect the properties of the original as fully as possible, and on the other hand, it should be as simple as possible so as not to complicate the study. Strictly speaking, every technical system (or device) is nonlinear and nonstationary and contains both lumped and distributed parameters. Obviously, the exact mathematical description of such systems presents great difficulties and is not dictated by practical necessity. The success of system analysis depends on how correctly the degree of idealization or simplification is chosen when constructing the mathematical model.

For example, any active resistance (R) may depend on temperature and exhibit reactive properties at high frequencies. At high currents and operating temperatures, its characteristics become significantly nonlinear. At the same time, at normal temperature, at low frequencies and in small-signal mode, these properties can be ignored and the resistance can be considered an inertia-free linear element.

Thus, in a number of cases, with a limited range of parameter changes, it is possible to significantly simplify the model and neglect the nonlinearity of the characteristics and the nonstationarity of the parameters of the device under study, which allows it to be analyzed, for example, using the well-developed mathematical apparatus of linear systems with constant parameters.

As an example, Figure 1.4 shows a block diagram (a graphic representation of the mathematical model) of the PLL system. With a small instability of the input signal frequency, the nonlinearity of the characteristics of the phase discriminator and of the controlled element can be neglected. In this case, the mathematical models of the functional elements shown in Figure 1.3 can be represented as linear links described by the corresponding transfer functions.



Figure 1.4 - Structural diagram (graphical representation of the mathematical model) of the PLL system

Designing electronic circuits with the help of computer analysis and optimization programs, as noted above, has a number of advantages over the traditional method of designing "manually" with subsequent fine-tuning on a breadboard. First, with computer analysis programs it is much easier to observe the effect of varying circuit parameters than with experimental studies. Second, it is possible to analyze critical operating modes of the circuit without physically destroying its components. Third, analysis programs make it possible to evaluate the operation of the circuit with the worst combination of parameters, which is difficult and not always possible to do experimentally. Fourth, the programs make it possible to carry out such measurements on a model of an electronic circuit that are difficult to perform experimentally in the laboratory.

The use of a computer does not exclude experimental research (and even involves subsequent testing on a mock-up), but it gives the designer a powerful tool that can significantly reduce the time spent on design and reduce the cost of development. A computer gives a particularly significant effect in the design of complex devices (for example, integrated circuits), when it is necessary to take into account a large number of factors affecting the operation of the circuit, and experimental modification is too expensive and laborious.

Despite the obvious advantages, the use of computers has created great difficulties: it is necessary to develop mathematical models of electronic circuit components and create libraries of their parameters, improve mathematical methods for analyzing the diverse operating modes of various devices and systems, develop high-performance computing systems, and so on. In addition, many tasks turned out to be beyond the capabilities of computers. For most devices, their structure and circuit diagram largely depend on the application area and the initial design data, which creates great difficulties in the computer-aided synthesis of circuit diagrams. In this case, the initial version of the circuit is drawn up by an engineer "manually", with subsequent modeling and optimization on a computer. The greatest achievements in building programs for structural synthesis and synthesis of circuit diagrams are in the field of designing matching circuits, analog and digital filters, and devices based on programmable logic arrays (PLAs).

When developing a mathematical model, a complex system is divided into subsystems, and for a number of subsystems the mathematical models can be unified and collected in appropriate libraries. Thus, when studying electronic devices with computer simulation programs, a schematic or block diagram is a graphical representation of components, each of which is associated with a selected mathematical model.

Models of typical independent sources, transistors, passive components, integrated circuits, logic elements are used to study circuit diagrams.

To study systems defined by block diagrams, it is important to specify the relationship between the input and output variables. In this case, the output of any structural component is represented as a dependent source. As a rule, this relationship is given either by a polynomial function or by a rational-fractional transfer function using the Laplace operator. By choosing the function coefficients, it is possible to obtain models of such structural components as an adder, a subtractor, a multiplier, an integrator, a differentiator, a filter, an amplifier, and others.
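As a hedged illustration (the coefficients are hypothetical, and scipy.signal is used only as one possible tool), an amplifier can be modeled as a static polynomial dependent source and an integrator as the rational-fractional transfer function W(s) = 1/s:

```python
# Sketch of structural components: polynomial dependent source + transfer-function link.
import numpy as np
from scipy import signal

def amplifier(u, gain=2.0):
    """Static dependent source: output is a polynomial (here linear) in the input."""
    return gain * u

# Integrator: W(s) = 1/s, i.e. numerator [1], denominator [1, 0].
integrator = signal.TransferFunction([1.0], [1.0, 0.0])

t = np.linspace(0.0, 2.0, 201)
u = np.ones_like(t)                            # unit step at the component input
t_out, y, _ = signal.lsim(integrator, U=amplifier(u), T=t)
print("output after 2 s =", y[-1])             # integral of the constant 2.0 over 2 s, about 4
```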

Modern computer simulation programs contain dozens of libraries of various types of models, and each library contains dozens or hundreds of models of modern transistors and microcircuits produced by leading manufacturers. These libraries often make up the bulk of the software's volume. At the same time, in the process of modeling it is possible to promptly correct the parameters of existing models or create new ones.

Quantitative (mathematical-statistical) analysis is a set of procedures and methods for describing and transforming research data based on the use of a mathematical-statistical apparatus.

Quantitative analysis implies the ability to treat results as numbers, i.e. the application of methods of calculation.

When deciding on quantitative analysis, we can immediately turn to parametric statistics for help or first carry out primary and secondary data processing.

At the stage of primary processing, two main tasks are solved: presenting the obtained data in a visual form convenient for preliminary qualitative analysis (ordered series, tables and histograms) and preparing the data for the application of specific methods of secondary processing.

Ordering (the arrangement of numbers in descending or ascending order) makes it possible to identify the maximum and minimum quantitative values of the results, to evaluate which results occur most often, and so on. A set of indicators of various psychodiagnostic methods obtained for a group is presented in the form of a table, in whose rows the survey data of one subject are located and in whose columns the distribution of the values of one indicator over the sample. A histogram is the frequency distribution of the results over a range of values.
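A minimal sketch of primary processing with hypothetical raw scores (ordering, a frequency table and histogram bins) might look like this:

```python
# Sketch of primary processing: ordered series, frequency table, histogram bins.
import numpy as np

raw = np.array([7, 12, 9, 7, 15, 11, 9, 7, 13, 10, 9, 12])   # hypothetical test scores

ordered = np.sort(raw)                       # ordered series (ascending)
print("ordered:", ordered, "min:", ordered[0], "max:", ordered[-1])

values, counts = np.unique(raw, return_counts=True)   # frequency table
for v, c in zip(values, counts):
    print(f"score {v}: {c} subject(s)")

hist, edges = np.histogram(raw, bins=4)      # histogram: frequencies over intervals
print("bin edges:", edges)
print("frequencies:", hist)
```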

At the stage of secondary processing, the characteristics of the subject of research are calculated. Analysis of the results of secondary processing allows us to choose the set of quantitative characteristics that will be most informative. The purpose of the secondary processing stage consists not only in obtaining information, but also in preparing the data for a possible assessment of the reliability of that information. In the latter case we turn to parametric statistics for help.

Types of methods of mathematical-statistical analysis:

Descriptive statistics methods are aimed at describing the characteristics of the phenomenon under study: the distribution, the features of relationships, etc.

Statistical inference methods serve to establish the statistical significance of the data obtained during experiments.

Data transformation methods are aimed at transforming data in order to optimize their presentation and analysis.

Quantitative methods of analysis and interpretation (transformation) of data include the following:

Primary processing of "raw" scores. To make it possible to use nonparametric statistics, two methods are used: classification (dividing objects into classes according to some criterion) and systematization (ordering objects within classes, classes among themselves, and sets of classes with other sets of classes).
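A minimal sketch of these two techniques with hypothetical scores and cut-offs (the thresholds and subject labels are invented for illustration):

```python
# Sketch of classification and systematization of raw scores.
raw_scores = {"S1": 34, "S2": 12, "S3": 27, "S4": 41, "S5": 19}

def classify(score: int) -> str:
    # Criterion: hypothetical cut-offs for low / medium / high.
    if score < 20:
        return "low"
    if score < 35:
        return "medium"
    return "high"

# Classification: divide objects into classes by the criterion.
classes: dict[str, list[tuple[str, int]]] = {}
for subject, score in raw_scores.items():
    classes.setdefault(classify(score), []).append((subject, score))

# Systematization: order objects within classes and the classes among themselves.
class_order = {"low": 0, "medium": 1, "high": 2}
for label in sorted(classes, key=lambda c: class_order[c]):
    members = sorted(classes[label], key=lambda pair: pair[1])
    print(label, members)
```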


22. Data modeling. ANSI/SPARC architecture

In the general case, databases have the property of independence from application programs and, as a rule, are represented by a three-level architecture: external, conceptual and physical; access to the database is carried out using a DBMS.

The architecture we are considering is almost completely consistent with the architecture proposed by the ANSI/SPARC Study Group on Data Base Management Systems. The task of the group was to determine whether any areas of database technology needed standardization (and if so, which ones) and to develop a set of recommended actions in each of these areas. In the process of working on these tasks, the group came to the conclusion that the only suitable object of standardization is interfaces, and in accordance with this it defined the general architecture, or framework, of the database system and pointed out the important role of such interfaces. The final report (1978) provided a detailed description of the architecture and of some of the 42 specified interfaces.

The architecture divides the database system into three levels. The perception of data at each of the levels is described using a schema.

Fig. Three levels of the ANSI/SPARC architecture

The external level is the representation of an individual user. An individual user is interested only in a certain part of the entire database. In addition, the user's perception of this part will certainly be more abstract than the chosen way of storing the data. The data sublanguage provided to the user is defined in terms of external records (for example, an operation may fetch a set of external records; an external record about an employee can be defined as containing a six-character employee number field, a field of five decimal digits for storing his salary, etc.).

A conceptual representation is a representation of all the information in the database in a form that is somewhat more abstract than the description of the physical way the data are stored (as is also the case for an external representation). The conceptual representation is defined by the conceptual schema. To achieve data independence, it does not include any indication of storage structures or access methods, ordering of stored data, indexing, and so on. The definitions in the conceptual schema should refer only to the content of the information. If the conceptual schema indeed provides data independence in this sense, then the external schemas defined on top of it will certainly provide data independence as well. A conceptual view is a view of the entire contents of the database, and the conceptual schema is the definition of such a view. The definitions in the conceptual schema can also characterize many additional aspects of information processing, such as security constraints or data integrity requirements.

The internal level is a low-level view of the entire database. An internal record is a stored record. The internal representation is nevertheless separated from the physical layer, since it does not consider physical records (commonly referred to as blocks or pages). The internal representation is described by the internal schema, which defines not only the types of stored records, but also the existing indexes, how the stored fields are represented, the physical ordering of the records, and so on.

In addition to the elements of the three levels themselves, the architecture under consideration also includes certain mappings.

The conceptual-internal mapping establishes a correspondence between the conceptual representation and the stored database, i.e. it describes how conceptual records and fields are represented at the internal level. When the structure of the stored database changes, this mapping also changes, so that the conceptual schema can remain unchanged. In other words, to ensure data independence, the results of any changes to the storage schema should not be visible at the conceptual level. This mapping serves as the basis for physical data independence: users and user programs are immune to changes in the physical structure of the stored database.

The external-conceptual mapping defines a correspondence between some external representation and the conceptual representation. This mapping serves as the basis for logical data independence: users and user programs are immune to changes in the logical structure of the database (changes at the conceptual level). For example, several conceptual fields can be combined into one external (virtual) field.

The external-external mapping allows one definition of an external representation to be expressed in terms of another, without requiring an explicit definition of the mapping of every external representation to the conceptual level.
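To make the three levels and the mappings between them concrete, here is a minimal Python sketch with a hypothetical schema (the field names, the stored format and the view functions are invented for illustration and are not part of the ANSI/SPARC report): the stored records form the internal level, a conceptual-internal mapping rebuilds conceptual records from them, and an external-conceptual mapping derives a user-specific view.

```python
# Sketch of the three ANSI/SPARC levels and the mappings between them (hypothetical schema).

# Internal level: how records are actually stored (field order, salary packed in cents).
stored_records = [
    ("E0001", 525000),      # employee number, salary stored in cents
    ("E0002", 610000),
]

# Conceptual-internal mapping: rebuild conceptual records from the stored form.
def conceptual_view():
    return [{"emp_no": no, "salary": cents / 100} for no, cents in stored_records]

# External-conceptual mapping: an individual user sees only part of the data,
# possibly with derived (virtual) fields.
def external_view_payroll():
    return [{"emp_no": r["emp_no"], "annual_salary": r["salary"] * 12}
            for r in conceptual_view()]

print(external_view_payroll())
# If the storage format changes (e.g. salary stored in whole currency units), only the
# conceptual-internal mapping has to change: this is physical data independence.
```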